I'm creating a pipeline that scores images using a custom PyTorch U-Net. Assume the model is already registered and I'm now building the pipeline. I'm running into an OOM issue because Azure sets the Docker shared memory to 2g. I attempted to set the resources in my component YAML as:
$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json
name: my_component
display_name: Create segmentation maps
type: command
description: |-
  **U-Net**
inputs:
  nnetmodel:
    type: custom_model
  prepared_data:
    type: uri_folder
outputs:
  score_results:
    type: uri_folder
  score_summary:
    type: uri_file
code: src
environment: azureml:mynet-env@latest
distribution:
  type: pytorch
  process_count_per_instance: 1
resources:
  shm_size: 20g
  instance_count: 1
  docker_args: --ipc=host --shm-size 20g
command: >-
Loading the component, everything looks good:
score_data = load_component(source="my_component.yml")
print(score_data)
But when I create/update the component in my main code, it removes all of the resources except instance_count:
ml_client.components.create_or_update(score_data)
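When I look at the registered component afterwards, only instance_count is still there. A minimal check of what I mean (I'm assuming the returned CommandComponent exposes the same resources attribute the loaded one does):

```python
# azure-ai-ml v2; the `resources` attribute name is my assumption here.
registered = ml_client.components.create_or_update(score_data)
print(registered.resources)   # instance_count survives; shm_size and docker_args are gone
```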
How can I change the Docker configuration for pipeline components with the Azure ML SDK v2?
I expected ml_client.components.create_or_update(score_data) to keep the properties of the component I created.
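If component-level Docker settings aren't meant to persist, is the intended pattern to override the resources on the step when defining the pipeline instead? The sketch below is roughly what I have in mind (the JobResourceConfiguration import and its shm_size/docker_args fields are my reading of the v2 SDK, not something I've verified fixes this):

```python
from azure.ai.ml import dsl, Input
from azure.ai.ml.entities import JobResourceConfiguration

@dsl.pipeline(description="Score images with the U-Net component")
def scoring_pipeline(nnetmodel: Input, prepared_data: Input):
    # `score_data` is the component loaded above with load_component().
    score_step = score_data(nnetmodel=nnetmodel, prepared_data=prepared_data)
    # Override the Docker / shared-memory settings on the step rather than the component.
    score_step.resources = JobResourceConfiguration(
        instance_count=1,
        shm_size="20g",
        docker_args="--ipc=host --shm-size 20g",
    )
    return {
        "score_results": score_step.outputs.score_results,
        "score_summary": score_step.outputs.score_summary,
    }
```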