Hi macleonsh,
This seems like an issue with MPD getting a SIGTERM when trying to load the default AFI.
Could you specify the region you are launching this instance in?
In the meantime I'll see if Xilinx can help out with the error message.
The vitis_runtime setup should not matter; you can add your version to the supported list and move forward with your testing:
echo >> $AWS_FPGA_REPO_DIR/Vitis/vitis_xrt_version.txt
echo "2019.2:0be8f75ca7e8a676ae5d385f453636c11567d584" >> $AWS_FPGA_REPO_DIR/Vitis/vitis_xrt_version.txt
source $AWS_FPGA_REPO_DIR/vitis_setup.sh #Should complete without error
-Deep
Hi macleonsh,
Are you running this instance in a China region by any chance?
-Deep
Yes, I am using f1.4 in a China region.
OK, you remind me that Brain Xu from Xilinx previously helped identify a similar issue: in older XRT code for AWS deployments, a default AFI needs to be loaded, but if that default AFI is not accessible, the setup hangs.
This can be worked around by manually loading any AFI, so I did the same thing -- what I did:
- sudo systemctl stop mpd
- sudo fpga-load-local-image -S 0 -I agfi-0fcf87119b8e97bf3
- sudo fpga-load-local-image -S 1 -I agfi-0fcf87119b8e97bf3
- sudo systemctl start mpd
Now it looks OK: xbutil scan finds the two FPGA cards.
[0] 0000:00:1d.0 xilinx_aws-vu9p-f1_dynamic_5_0(ID=0xabcd) user(inst=129)
[1] 0000:00:1b.0 xilinx_aws-vu9p-f1_dynamic_5_0(ID=0xabcd) user(inst=128)
Why do you require that a default AFI be loaded? This now looks like it is part of XRT; in that case the default AFI must be accessible in all regions.
BTW, can you help check my initial two questions,
i.e. how can I run an application on a specific FPGA slot?
Hi Macleonsh,
Deep asked me to help answer your question.
A1, if you run just one application, that application has access to both FPGAs; there is an OpenCL API that returns the IDs of both FPGAs, and it is then up to the application itself to determine which FPGA a kernel runs on.
A2, if you have 2 applications, you can run each of them in a container (or in a pod managed by Kubernetes, a similar thing) and assign one FPGA to each container.
If you have more applications, still only 2 of them can have FPGA access, since so far we don't have FPGA hardware-level virtualization support. You can make the apps with FPGA access run as servers, and make all other apps that want acceleration run as clients whose requests are proxied to the servers.
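The proxy pattern above can be sketched in plain Python: one process owns the accelerator and serves requests over HTTP, and any number of clients forward work to it. This is only an illustration of the pattern; run_kernel() is a hypothetical stand-in for a real XRT/OpenCL kernel launch, which is not shown here.

```python
# Sketch of the server/client proxy pattern: only the "server" process
# has FPGA access; other apps send their requests to it over HTTP.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_kernel(payload):
    # Hypothetical stand-in for the real accelerated call (e.g. an
    # OpenCL kernel launched through XRT). Here we just echo the input
    # in upper case so the round trip is observable.
    return {"result": payload["data"].upper()}


class FpgaProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        out = json.dumps(run_kernel(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):  # keep the demo quiet
        pass


# "Server" app: the one process that owns the FPGA.
server = HTTPServer(("127.0.0.1", 0), FpgaProxyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Client" app: no FPGA access, proxies its request to the server.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps({"data": "helloworld"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["result"])  # prints HELLOWORLD
server.shutdown()
```

In the Kubernetes setup described next, the server side would be a pod with the FPGA device assigned to it, and the HTTP hop would be a cluster service instead of localhost.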
Here is an example of how to do that on AWS F1:
https://github.com/xuhz/k8s-xilinx-fpga-device-plugin/blob/master/aws-readme.md
I create a one-node Kubernetes cluster on an AWS F1 node and launch a server pod, exposed as a service, that has FPGA access; I also launch a client pod that doesn't have FPGA access. Whenever the client sends a request to the server, the server pod runs a helloworld on the FPGA and sends the output back as the response.
You may not be able to try that example on your F1, since I believe the AFI used in the server pod's Docker image is not accessible in China.