Compute Express Link (CXL) 2.0 - Comprehensive 4-Day Course

Location: US Pacific Time, 9am-5pm
Date: 9/26/2023 - 9/29/2023
Duration: 4 Days
Instructor: Ravi Budruk
Price: $2,995.00

A comprehensive CXL 2.0 self-paced eLearning video course is included in the training fee.

Comprehensive Compute Express Link (CXL) 2.0 Architecture Course Details:

Compute Express Link (CXL) is a high-bandwidth, low-latency serial interconnect between host processors and devices such as accelerators, memory controllers/buffers, and I/O devices. CXL is built on the PCI Express® (PCIe®) 5.0 physical layer, running at 32 GT/s with x16, x8 and x4 link widths. Degraded modes run at 16 GT/s and 8 GT/s, and in x2 and x1 link widths.
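
For a rough sense of the raw throughput these speed and width combinations imply, here is a short C sketch (our own illustration, not part of the course materials) that tabulates per-direction raw bandwidth. It ignores 128b/130b encoding, flit framing and protocol overhead, so deliverable payload bandwidth is lower.

    /* Hedged sketch (ours, not from the course): raw per-direction link
     * bandwidth for the speed/width combinations mentioned above.
     * Ignores 128b/130b encoding, flit framing and protocol overhead. */
    #include <stdio.h>

    int main(void)
    {
        const double speeds_gt[] = { 32.0, 16.0, 8.0 };   /* GT/s per lane */
        const int    widths[]    = { 16, 8, 4, 2, 1 };    /* lane counts   */

        for (unsigned s = 0; s < sizeof speeds_gt / sizeof speeds_gt[0]; s++)
            for (unsigned w = 0; w < sizeof widths / sizeof widths[0]; w++)
                /* 1 GT/s per lane is roughly 1 Gb/s raw; divide by 8 for GB/s */
                printf("x%-2d @ %2.0f GT/s : %5.1f GB/s raw per direction\n",
                       widths[w], speeds_gt[s],
                       speeds_gt[s] * widths[w] / 8.0);
        return 0;
    }

For example, a x16 link at 32 GT/s works out to 64 GB/s of raw bandwidth in each direction.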

The CXL interconnect adds coherency and memory semantics, allowing it to be used in heterogeneous processing systems that interconnect a variety of host processors, memory subsystems and peripheral devices. CXL has applications in standard computer systems, Artificial Intelligence, Machine Learning, communication systems, and High Performance Computing. Emerging applications require a diverse mix of CPUs, GPUs, FPGAs, peripherals such as smart NICs, and other accelerators, interconnected via an open industry-standard protocol with the features that CXL provides. CXL defines three protocols: 1) CXL.io, which is based on PCIe, 2) CXL.cache and 3) CXL.mem. CXL uses the PCIe stack, offering full interoperability with PCIe.

MindShare’s comprehensive CXL 2.0 Architecture course provides a solid foundation in platform architectures and the use cases of the three CXL protocols with Type 1, Type 2 and Type 3 devices. The course then details the role of the Transaction Layer, Link Layer, ARB/MUX and Flex Bus Logical and Electrical Physical Layer in a CXL port design. We explain the enumeration and configuration process during system bring-up, with details of the configuration registers. Other topics include: switches, reset, manageability, RAS features, power management, performance considerations and compliance testing.
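
To make the Type 1, Type 2 and Type 3 distinction concrete, the following small C sketch (names and encodings are our own, not taken from the CXL specification) models which protocols each device type exercises: Type 1 devices use CXL.io and CXL.cache, Type 2 devices use all three, and Type 3 memory expanders use CXL.io and CXL.mem.

    /* Illustrative sketch: protocol mix per CXL device type.
     * The names and bit assignments here are our own, for illustration only. */
    #include <stdio.h>

    enum cxl_protocol {
        CXL_IO    = 1 << 0,   /* PCIe-based I/O: discovery, configuration, DMA */
        CXL_CACHE = 1 << 1,   /* device coherently caches host memory          */
        CXL_MEM   = 1 << 2    /* host accesses device-attached memory          */
    };

    struct cxl_device_type {
        const char *name;
        unsigned    protocols;   /* bitmask of enum cxl_protocol */
    };

    static const struct cxl_device_type types[] = {
        { "Type 1 (device with cache)",            CXL_IO | CXL_CACHE           },
        { "Type 2 (device with cache and memory)", CXL_IO | CXL_CACHE | CXL_MEM },
        { "Type 3 (memory expander)",              CXL_IO | CXL_MEM             },
    };

    int main(void)
    {
        for (unsigned i = 0; i < sizeof types / sizeof types[0]; i++)
            printf("%-40s io:%d cache:%d mem:%d\n", types[i].name,
                   !!(types[i].protocols & CXL_IO),
                   !!(types[i].protocols & CXL_CACHE),
                   !!(types[i].protocols & CXL_MEM));
        return 0;
    }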

You Will Learn:

  • CXL system architectures with Type 1, Type 2 and Type 3 devices
  • CXL transaction protocol (CXL.io and CXL.cache/mem)
  • CXL port design comprising the Transaction, Link, ARB/MUX and Flex Bus Physical Layers
  • Enumeration and initialization issues with configuration register definitions
  • Power management
  • Reliability, Availability, Serviceability (RAS) and error handling features
  • CXL Switch architecture
  • Variety of Resets
  • CXL register architecture

Course Length: 4 Days

US Pacific Time Zone Times:

Start time: 9:00am US Pacific Time
End time: 5:00pm US Pacific Time (45-minute lunch break, 12:30-1:15pm)

Location:

Virtual classroom, US Pacific Time Zone, 9am-5pm

Who Should Attend?

This course is hardware-oriented, but because it covers CXL initialization topics it is suitable for both hardware design engineers and software engineers. It is ideal for RTL-, chip-, system- or system-board-level design engineers who need a broad understanding of CXL architecture, and is also suitable for chip-level and board-level validation engineers.

Course Outline:

  • CXL Features and Architecture Overview
    • Limitations of interconnects that do not support coherency and memory semantics
    • CXL and Flex Bus Link features
    • CXL.io, CXL.cache, CXL.mem protocol overview
    • Type 1 (devices with cache), Type 2 (devices with cache and memory) and Type 3 (memory expander) devices
    • Layered architecture overview
    • Example Transaction Flows
  • CXL Transaction Layer
    • CXL.mem protocol
    • CXL.cache protocol
    • CXL.io protocol
    • Transaction Ordering
  • CXL Link Layer
    • CXL.io Link Layer
    • CXL.cache and CXL.mem common Link Layer
    • Flit packets
    • Link Layer initialization (LLI)
    • CXL.cache/mem packets flow control
    • CXL.cache/mem retry mechanism
    • CXL.cache viral feature
  • CXL ARB/MUX Layer
    • Virtual Link State Machine (vLSM) states
    • ARB/MUX Link Management packets (ALMPs)
  • Flex Bus Physical Layer
    • Protocol ID and Flit packet layout
    • Byte Striping
    • NULL Flits
    • Sync Header Bypass (Latency Optimization) mode
    • Link training
  • Resets
    • Cold reset
    • Warm reset
    • Hot Reset
    • Function Level reset (FLR)
    • CXL Reset
  • Power Management
    • Runtime control power management
    • Physical Layer power management
    • Latency Tolerance communication
  • RAS and Error Handling
    • RAS features
    • Link Down handling
    • Viral handling
    • Memory Error Firmware Notification (MEFN) feature
  • Enumeration, Manageability and Memory Interleaving
  • CXL Control and Status related registers
    • DVSEC Configuration and Status registers (see the discovery sketch following this outline)
    • Memory Mapped registers
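
As an illustration of where the configuration registers covered in the last outline item live, the hedged C sketch below walks a device's PCIe extended capability list looking for a Designated Vendor-Specific Extended Capability (DVSEC) whose DVSEC Vendor ID is the CXL consortium's 0x1E98. The read_config32() helper is a hypothetical stand-in for whatever config-space access mechanism a platform provides; the header offsets follow the standard PCIe DVSEC layout.

    /* Hedged sketch: locating a CXL DVSEC in a device's PCIe extended
     * capability list.  read_config32() is a hypothetical helper standing in
     * for a platform's config-space access mechanism. */
    #include <stdint.h>
    #include <stdio.h>

    #define PCIE_EXT_CAP_START   0x100    /* extended capabilities begin here  */
    #define EXT_CAP_ID_DVSEC     0x0023   /* PCIe DVSEC extended capability ID */
    #define CXL_DVSEC_VENDOR_ID  0x1E98   /* CXL consortium vendor ID          */

    /* Demonstration stub; a real implementation would read PCIe config space. */
    static uint32_t read_config32(uint16_t offset) { (void)offset; return 0; }

    /* Returns the config-space offset of the first CXL DVSEC, or 0 if absent. */
    static uint16_t find_cxl_dvsec(void)
    {
        uint16_t offset = PCIE_EXT_CAP_START;

        while (offset != 0) {
            uint32_t hdr    = read_config32(offset);    /* capability header */
            uint16_t cap_id = hdr & 0xFFFF;

            if (cap_id == EXT_CAP_ID_DVSEC) {
                uint32_t dvsec_hdr1 = read_config32(offset + 4);
                if ((dvsec_hdr1 & 0xFFFF) == CXL_DVSEC_VENDOR_ID) {
                    uint16_t dvsec_id = read_config32(offset + 8) & 0xFFFF;
                    printf("CXL DVSEC (DVSEC ID %u) at offset 0x%x\n",
                           dvsec_id, offset);
                    return offset;
                }
            }
            offset = (hdr >> 20) & 0xFFC;   /* next capability pointer */
        }
        return 0;
    }

    int main(void)
    {
        if (find_cxl_dvsec() == 0)
            printf("No CXL DVSEC found (stubbed config space)\n");
        return 0;
    }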

Recommended Prerequisites:

A complete working knowledge of PCI Express architecture, computer architecture fundamentals, and cache coherency concepts including the MESI protocol.

Training Materials:

  • Students will be provided with a PDF version of the presentation materials used in class
  • A comprehensive CXL 2.0 self-paced eLearning course for review after the course is completed