
Questions tagged with AWS DeepLens


Can't compile GluonCV model on DeepLens

In my ipynb I download a pre-trained model, export it, and push it to S3:

```python
import numpy as np
import mxnet as mx
import gluoncv as gcv
import tarfile

# Load a pre-trained SSD model from the GluonCV model zoo and export the
# symbol/params files so the model can be loaded outside of Gluon.
net = gcv.model_zoo.get_model("ssd_512_resnet50_v1_voc", pretrained=True)
net.hybridize()
net(mx.nd.ones((1, 3, 512, 512)))  # one forward pass so the hybridized graph is built
net.export("model")                # writes model-0000.params and model-symbol.json

# Package the exported files and upload the archive to S3.
tar = tarfile.open("ssd_512_resnet50_v1_voc.tar.gz", "w:gz")
for name in ["model-0000.params", "model-symbol.json"]:
    tar.add(name)
tar.close()

sess.upload_data(
    path="ssd_512_resnet50_v1_voc.tar.gz",
    bucket=bucket,
    key_prefix=pretrained_model_sub_folder,
)
```

The DeepLens project downloads the model without a problem, and I can see the updated .params and .json files in the /opt/awscam/artifacts/ directory. But when I try to compile the model in my inference script on the DeepLens device:

```python
error, model_path = mo.optimize('model', 512, 512, 'MXNet')
```

I get the following compile error:

```
DEBUG:mo:DLDT command: python3 /opt/awscam/intel/deeplearning_deploymenttoolkit/deployment_tools/model_optimizer/mo_mxnet.py --data_type FP16 --reverse_input_channels --input_shape [1,3,512,512] --input_model /opt/awscam/artifacts/model-0000.params --scale 1 --model_name model --output_dir /opt/awscam/artifacts
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: /opt/awscam/artifacts/model-0000.params
    - Path for generated IR: /opt/awscam/artifacts
    - IR output name: model
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: [1,3,512,512]
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: 1.0
    - Precision of IR: FP16
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: True
MXNet specific parameters:
    - Deploy-ready symbol file: None
    - Enable MXNet loader for models trained with MXNet version lower than 1.0.0: False
    - Prefix name for args.nd and argx.nd files: None
    - Pretrained model to be merged with the .nd files: None
    - Enable saving built parameters file from .nd files: False
Model Optimizer version: 2019.1.0-341-gc9b66a2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/mo.py", line 229, in optimize
    set_platform_param(mxnet_params, aux_inputs))
  File "/usr/lib/python3/dist-packages/mo.py", line 160, in run_optimizer
    std_err = re.sub(b', question #\d+', '', std_err)
  File "/usr/lib/python3.5/re.py", line 182, in sub
    return _compile(pattern, flags).sub(repl, string, count)
TypeError: sequence item 1: expected a bytes-like object, str found
```

I think there must be a version mismatch between mxnet, gluoncv, and the Intel OpenVINO compiler, but I'm stuck on this issue. Any help is appreciated.
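For reference, the final TypeError reproduces outside of mo.py whenever re.sub gets a bytes pattern and bytes input but a str replacement, which appears to be what line 160 of mo.py is doing (std_err presumably comes back from the optimizer subprocess as bytes). A minimal sketch of the same failure, with a made-up stderr value for illustration:

```python
import re

# Made-up stderr bytes, standing in for whatever the optimizer subprocess returns.
std_err = b"mxnet deprecation warning, question #12345"

try:
    # Same call shape as mo.py line 160: bytes pattern and bytes input but a
    # str replacement ('') -- this raises a TypeError on Python 3.
    re.sub(rb', question #\d+', '', std_err)
except TypeError as exc:
    print(exc)  # e.g. "sequence item 1: expected a bytes-like object, str found"

# Matching the replacement type to the input works as intended:
print(re.sub(rb', question #\d+', b'', std_err))  # b'mxnet deprecation warning'
```

So the crash itself seems independent of which model is being compiled, and it may be masking whatever the underlying optimizer output on stderr actually was.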
3 answers · 0 votes · 45 views · asked 2 months ago