Can't compile GluonCV model on DeepLens


In my notebook (.ipynb) I download a pre-trained model and push it to S3:

import tarfile

import mxnet as mx
import gluoncv as gcv
import sagemaker

# Export the pretrained SSD model as a symbol/params pair
net = gcv.model_zoo.get_model("ssd_512_resnet50_v1_voc", pretrained=True)
net.hybridize()
net(mx.nd.ones((1, 3, 512, 512)))  # one forward pass to trace the graph
net.export("model")  # writes model-symbol.json and model-0000.params

# Package both files into a tarball
with tarfile.open("ssd_512_resnet50_v1_voc.tar.gz", "w:gz") as tar:
    for name in ["model-0000.params", "model-symbol.json"]:
        tar.add(name)

# Upload to S3 (session/bucket/prefix are set up earlier in the notebook)
sess = sagemaker.Session()
bucket = sess.default_bucket()
pretrained_model_sub_folder = "pretrained-models"
sess.upload_data(
    path="ssd_512_resnet50_v1_voc.tar.gz",
    bucket=bucket,
    key_prefix=pretrained_model_sub_folder,
)

I can download the model OK with the DeepLens project; I see the updated .params and .json files in the /opt/awscam/artifacts/ directory. But when I try to compile the model in my inference script on the DeepLens device:

error, model_path = mo.optimize('model', 512, 512, 'MXNet')

I get the following compile error:

DEBUG:mo:DLDT command: python3 /opt/awscam/intel/deeplearning_deploymenttoolkit/deployment_tools/model_optimizer/mo_mxnet.py --data_type FP16 --reverse_input_channels  --input_shape [1,3,512,512] --input_model /opt/awscam/artifacts/model-0000.params --scale 1 --model_name model --output_dir /opt/awscam/artifacts
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/opt/awscam/artifacts/model-0000.params
	- Path for generated IR: 	/opt/awscam/artifacts
	- IR output name: 	model
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	[1,3,512,512]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	1.0
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	True
MXNet specific parameters:
	- Deploy-ready symbol file: 	None
	- Enable MXNet loader for models trained with MXNet version lower than 1.0.0: 	False
	- Prefix name for args.nd and argx.nd files: 	None
	- Pretrained model to be merged with the .nd files: 	None
	- Enable saving built parameters file from .nd files: 	False
Model Optimizer version: 	2019.1.0-341-gc9b66a2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/mo.py", line 229, in optimize
    set_platform_param(mxnet_params, aux_inputs))
  File "/usr/lib/python3/dist-packages/mo.py", line 160, in run_optimizer
    std_err = re.sub(b', question #\d+', '', std_err)
  File "/usr/lib/python3.5/re.py", line 182, in sub
    return _compile(pattern, flags).sub(repl, string, count)
TypeError: sequence item 1: expected a bytes-like object, str found

I think there must be a mismatch in versions between MXNet, GluonCV, and the Intel OpenVINO compiler, but I'm stuck on this issue. Any help appreciated.
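For what it's worth, the TypeError at the bottom of the stack can be reproduced without any DeepLens tooling: line 160 of mo.py passes a bytes pattern with a str replacement to re.sub, and CPython fails while joining the result pieces. A minimal reproduction, plus what the call presumably should have been:

```python
import re

std_err = b"some optimizer output, question #42 and more"

# Bytes pattern + str replacement: this is what mo.py line 160 does,
# and it raises the same TypeError as in the traceback above
try:
    re.sub(b', question #\\d+', '', std_err)
except TypeError as e:
    print(e)

# Bytes pattern + bytes replacement works as intended
cleaned = re.sub(b', question #\\d+', b'', std_err)
print(cleaned)  # → b'some optimizer output and more'
```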

asked 2 years ago · 307 views
3 Answers

Dear Customer,

Thank you for using AWS DeepLens.

Looking at the error stack, it seems to be a syntax error in the script mo.py around re.sub(b', question #\d+', '', std_err). Since the inference script used in your example is not available, it is difficult to narrow down the issue; additionally, we would not be able to provide code support if the issue relates to a custom script built by the user.

I'd recommend working around the syntax error and trying again. If you still encounter the error, I'd recommend reaching out to AWS Support for further investigation along with all the details and logs, as sharing logs on this platform is not recommended.
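If editing the installed file is acceptable, the mismatch can be patched in place by making the replacement argument bytes as well. A hypothetical helper, not an official fix (the mo.py path comes from the traceback; back the file up first and run with root privileges on the device):

```python
from pathlib import Path

# The call as it appears in the traceback, and a bytes-replacement variant
BROKEN = "re.sub(b', question #\\d+', '', std_err)"
FIXED = "re.sub(b', question #\\d+', b'', std_err)"


def patch_mo(path):
    """Rewrite the str replacement to bytes; return True if a change was made."""
    p = Path(path)
    src = p.read_text()
    if BROKEN not in src:
        return False  # already patched, or a different mo.py version
    p.write_text(src.replace(BROKEN, FIXED))
    return True


# On the device (after backing up the original):
# patch_mo("/usr/lib/python3/dist-packages/mo.py")
```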

Open a support case with AWS using the link: https://console.aws.amazon.com/support/home?#/case/create

AWS
SUPPORT ENGINEER
answered 2 years ago

My understanding is that all MXNet models need to be compiled to the Intel IR (XML) format to run on the DeepLens device under the Intel OpenVINO runtime environment.

All I am doing is calling the optimizer to perform this conversion. This is not my script; you can open any Python terminal and type the following:

import mo
error, model_path = mo.optimize('model', 512, 512, 'MXNet')

If this is not the correct process, please tell me what is.

answered 2 years ago

How to create a Lambda inference function:

https://www.awsdeeplens.recipes/300_intermediate/330_guess_drawing/332_inference/

AWS DeepLens uses the Intel OpenVINO model optimizer to optimize the ML model to run on DeepLens hardware. The following code optimizes a model to run locally:

error, model_path = mo.optimize(model_name, INPUT_WIDTH, INPUT_HEIGHT)

I get the error after calling this function.

answered 2 years ago
