How does another CPU communicate with Xavier through PCIe? (Solved)

We are designing a hardware board with an Intel CPU (with DRAM attached) and a Xavier on it. We want to transfer data between them (Intel CPU and Xavier) over PCIe. Is this possible, given that the other side is a CPU and not a device like a GPU or an Ethernet card?

Thanks.
Daniel.

You could go with Ethernet or InfiniBand. Do you know what your bandwidth requirements are?

Using PCIe endpoint mode to communicate via RDMA between two Xaviers will be supported in a future release of JetPack.

For now you could use a 10/40GbE or InfiniBand card, as jjwadsworth suggested.

I see, thanks. I still want to know whether I can use PCIe endpoint mode on the Xavier to exchange data with the Intel CPU on my board. If it's possible, why do I need to wait for a future release of JetPack? Can I just modify the device tree (putting the PCIe controller in endpoint mode), build the BSP, and flash it onto the Xavier?

Thanks again.

Which controller would you like to use in endpoint mode?
FWIW, there are three controllers (C0, C4, and C5) that can operate in endpoint mode (one at a time), and the corresponding device-tree nodes are already available. Please enable
pcie_ep@14180000 for C0
pcie_ep@14160000 for C4
pcie_ep@141a0000 for C5
and disable their corresponding root-port nodes.
There is a platform driver (pcie-tegra-dw-ep.c) that configures the controller for endpoint-mode operation, and a client driver (tegra-pcie-ep-mem.c) that exercises the DMA functionality of the endpoint controller. You can write your own client driver to satisfy your requirements and modify the platform driver accordingly.
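As a rough sketch of what a minimal host-side client driver involves, here is a skeleton PCI driver in C. The vendor/device IDs and the choice of BAR0 are placeholders assumed for illustration; the reference to follow for the real values and the DMA handling is tegra-pcie-ep-mem.c in the L4T kernel sources.

```c
/*
 * Sketch of a minimal host-side PCI client driver for a Xavier
 * endpoint. IDs and BAR index are placeholders; see
 * tegra-pcie-ep-mem.c in the L4T kernel tree for the real thing.
 */
#include <linux/module.h>
#include <linux/pci.h>

#define XAVIER_EP_VENDOR_ID 0x10de /* NVIDIA vendor ID */
#define XAVIER_EP_DEVICE_ID 0x0001 /* placeholder; check lspci */

static int xavier_ep_probe(struct pci_dev *pdev,
			   const struct pci_device_id *id)
{
	void __iomem *bar;
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	ret = pci_request_regions(pdev, "xavier_ep");
	if (ret)
		goto err_disable;

	/* Map BAR0, behind which the EP exposes its reserved memory */
	bar = pci_iomap(pdev, 0, 0);
	if (!bar) {
		ret = -ENOMEM;
		goto err_release;
	}
	pci_set_drvdata(pdev, bar);

	/* Bus mastering is needed if the EP's DMA engine is exercised */
	pci_set_master(pdev);
	return 0;

err_release:
	pci_release_regions(pdev);
err_disable:
	pci_disable_device(pdev);
	return ret;
}

static void xavier_ep_remove(struct pci_dev *pdev)
{
	void __iomem *bar = pci_get_drvdata(pdev);

	pci_iounmap(pdev, bar);
	pci_release_regions(pdev);
	pci_disable_device(pdev);
}

static const struct pci_device_id xavier_ep_ids[] = {
	{ PCI_DEVICE(XAVIER_EP_VENDOR_ID, XAVIER_EP_DEVICE_ID) },
	{ }
};
MODULE_DEVICE_TABLE(pci, xavier_ep_ids);

static struct pci_driver xavier_ep_driver = {
	.name     = "xavier_ep",
	.id_table = xavier_ep_ids,
	.probe    = xavier_ep_probe,
	.remove   = xavier_ep_remove,
};
module_pci_driver(xavier_ep_driver);

MODULE_LICENSE("GPL");
```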

Thanks. Do you have any documentation explaining how endpoint mode works? I don't understand how this endpoint-mode controller communicates with an outside device. Does it work as an EP under the RC of the external CPU? Am I correct?
BTW, which of the three controllers is used as an RC to link to a GPU? If I configure that controller in EP mode, what will happen? Will the GPU stop working properly?

I'm interested in a 5Gb (or 10Gb) connection from a PC (x86) to a Xavier too. If the Xavier can work in endpoint mode, does that mean I can wire the PC's PCIe directly to the Xavier's PCIe? Maybe through my own carrier-board design, or through a Molex cable/PCIe card?

@vidyas

Does it work like this:

  1. I take tegra-pcie-ep-mem.c and compile it on the x86 host.
  2. I get pcie-tegra-dw-ep.c to work on the Xavier.
  3. I hook the two up via some kind of PCIe cable.
  4. Then I will be able to read/write the Xavier from the x86 side?

Do you have examples or documents?
Thanks

Does it work like this:

  1. I take tegra-pcie-ep-mem.c and compile it on the x86 host.
  2. I get pcie-tegra-dw-ep.c to work on the Xavier.
  3. I hook the two up via some kind of PCIe cable.
  4. Then I will be able to read/write the Xavier from the x86 side?

Yes, it would work. Please make sure that the cable you are using routes Tx from one end to Rx of the other end. Also, since the Xavier is self-powered, there is no need to supply power from the x86 to the Xavier (EP).
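As a quick sanity check once the two are cabled up, you can confirm from the x86 side that the Xavier EP actually enumerated by reading its vendor/device IDs out of PCI config space via sysfs. A minimal sketch, assuming a hypothetical BDF of 0000:01:00.0 (find the real one with lspci):

```c
/*
 * Read the vendor/device IDs of the Xavier EP from PCI config space
 * on the x86 host. The BDF 0000:01:00.0 is an assumption; substitute
 * the address reported by lspci.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/bus/pci/devices/0000:01:00.0/config";
	uint16_t ids[2]; /* ids[0] = vendor ID, ids[1] = device ID */
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (read(fd, ids, sizeof(ids)) != sizeof(ids)) {
		perror("read");
		close(fd);
		return 1;
	}
	close(fd);

	/* 0x10de is NVIDIA's PCI vendor ID */
	printf("vendor=0x%04x device=0x%04x\n", ids[0], ids[1]);
	return ids[0] == 0x10de ? 0 : 1;
}
```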

If the Xavier can work in endpoint mode, does that mean I can wire the PC's PCIe directly to the Xavier's PCIe? Maybe through my own carrier-board design, or through a Molex cable/PCIe card?
Yes

BTW, which of the three controllers is used as an RC to link to a GPU? If I configure that controller in EP mode, what will happen? Will the GPU stop working properly?
As far as I know, no GPU is attached by default, so whichever controller you are planning to connect a GPU to cannot also be operated in endpoint mode. A dual-mode controller can work either in root-port mode or endpoint mode, but only one at a time (they are mutually exclusive).
Once a controller is configured to operate in endpoint mode, you can reserve memory to be exposed to the host through the EP's BAR, thereby letting the host (in this case the x86) write directly to the Xavier's system memory. Alternatively, the Xavier EP's internal DMA engine can be used to push/pull data to/from the x86's system memory (this is how a typical PCIe endpoint works). You can go through the Xavier TRM to learn more about it, and also the device-tree documentation for how the EP can be configured through device-tree entries.
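To make the first option (the host writing through the EP's BAR) concrete, here is a minimal userspace sketch for the x86 side that maps the BAR via sysfs and writes into the memory the Xavier reserved behind it. The BDF path and the assumption that BAR0 backs the reserved memory are illustrative only; a real setup would normally use a kernel client driver along the lines of tegra-pcie-ep-mem.c.

```c
/*
 * Sketch: from the x86 host, mmap the Xavier EP's BAR0 through sysfs
 * and read/write the memory the EP reserved behind it. The BDF and
 * the BAR index are assumptions; check lspci -v for the real values.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define BAR_SIZE 4096 /* map one page; the real BAR size is in lspci -v */

int main(void)
{
	const char *res = "/sys/bus/pci/devices/0000:01:00.0/resource0";
	int fd = open(res, O_RDWR | O_SYNC);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	volatile uint32_t *bar = mmap(NULL, BAR_SIZE,
				      PROT_READ | PROT_WRITE,
				      MAP_SHARED, fd, 0);
	if (bar == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	bar[0] = 0xdeadbeef; /* lands in the Xavier's reserved memory */
	printf("readback: 0x%08x\n", bar[0]);

	munmap((void *)bar, BAR_SIZE);
	close(fd);
	return 0;
}
```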

Hi vidyas,
I'm just starting to use the Xavier, and I'm quite confused. I have two Xaviers directly connected, and I want to use PCIe to communicate directly between the two Xaviers without going through a switch. What can I do now? Do you have any documents? You said this will be supported in a future release of JetPack; is it not supported now?

Well, if you want documentation support for this, I'm afraid you may have to wait until the next release.

@vidyas, I see, thank you. Can I connect two Xaviers through a 10G switch? I checked the official documentation: Xavier supports GbE switches, but I don't know whether it supports 10G switches.

Can I connect two Xaviers through a 10G switch? I checked the official documentation: Xavier supports GbE switches, but I don't know whether it supports 10G switches.
As long as they are PCIe endpoints with Gen-3/4 support, we should be able to connect them to the Xavier without any issues.

@dusty_nv
Are there any plans to support RDMA communication between TX2s or TX1s in the future?

Currently the RDMA plan exists only for the GPU. It might get extended to two Tegras as well, but we can't confirm this right now.

I want to use PCIe to communicate directly between two Xaviers without going through a switch.
How long will we have to wait for the next release with documentation support for this?
Before that release, can we operate the Xavier in endpoint mode by configuring the driver ourselves?

When will the next JetPack supporting PCIe endpoint mode be released?

It will be soon, but I cannot tell you the exact time due to our support policy.

I am debugging Xavier endpoint mode and have run into a problem, described here: The problem about xavier pcie endpoint mode - Jetson AGX Xavier - NVIDIA Developer Forums

Can you give me some help? Thanks.

What operating systems are supported for the Xavier endpoint's host? Can the host PC be 64-bit running RHEL, Windows 7/10, or Fedora, or only Ubuntu 18.04?