Solved

Web/HTTP connector upload file in chunks

  • November 16, 2024
  • 2 replies
  • 59 views

Freddy
  • Thinkwise Local Partner Brasil

I have some big files to upload to OneDrive/SharePoint via the Graph API.

How do I model this scenario in Thinkwise?

Steps to Upload a Large File Using a Resumable Upload Session

  1. Create an Upload Session. Use the /drive/items/{parent-id}:/{filename}:/createUploadSession endpoint to initiate the upload process.

    Request:

    POST https://graph.microsoft.com/v1.0/me/drive/items/{parent-id}:/{filename}:/createUploadSession
    Authorization: Bearer <access_token>
    Content-Type: application/json

    Body (Optional):

    { "item": { "@microsoft.graph.conflictBehavior": "rename", "name": "your-backup-file.bak" } }

    Response: The response contains an uploadUrl that you use to upload the file in chunks.

    Example Response:

    { "uploadUrl": "https://upload.url.for.your.session", "expirationDateTime": "2024-11-16T23:59:00Z" }

  2. Upload the File in Chunks. Divide your file into chunks and upload each chunk using a PUT request to the uploadUrl.

    Chunk Size:

    • Recommended size: 10 MB (10,485,760 bytes).
    • Maximum size: 60 MB (62,914,560 bytes).
    • Every chunk except the last should be a multiple of 320 KiB (327,680 bytes); both sizes above satisfy this.

    Request for Each Chunk:

    PUT {uploadUrl}
    Content-Range: bytes {start}-{end}/{total-size}
    Content-Type: application/octet-stream

    Body: The binary data for the chunk.

    Example Content-Range:

    • For the first 10 MB chunk of a 580 MB file:

      Content-Range: bytes 0-10485759/580000000

    Continue until all chunks are uploaded.

  3. Complete the Upload. Once the final chunk is uploaded, the file is automatically saved in the specified location. (A consolidated sketch of all three steps follows this list.)

    If successful, the API responds with the metadata of the uploaded file, such as:

    { "id": "unique-file-id", "name": "your-backup-file.bak", "size": 580000000, "webUrl": "https://graph.microsoft.com/v1.0/me/drive/items/unique-file-id" }

Best answer by Anne Buit

This topic has been closed for comments

Anne Buit
  • Community Manager
  • November 26, 2024

Hi Freddy,

This will require quite a complex process flow that slices the file to upload and calls an HTTP connector or web connector endpoint multiple times. The entire upload has to be orchestrated in the process flow.

Depending on your hosting scenario, I'd say it may be easier for now to use a script that uploads via an SDK, and deploy it as an Azure Function that triggers on an Azure blob container or something similar.

The proper solution would be implementing your idea:

https://community.thinkwisesoftware.com/ideas/support-for-more-file-storage-providers-like-onedrive-and-google-drive-3331
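
To make that suggestion concrete, here is a minimal sketch of such a function, assuming the Azure Functions Python v2 programming model and a container named uploads (both assumptions); upload_to_graph is a hypothetical helper standing in for the createUploadSession + chunked PUT flow sketched under the question.

    import azure.functions as func

    app = func.FunctionApp()

    def upload_to_graph(name: str, data: bytes) -> None:
        # Hypothetical helper: run the createUploadSession + chunked PUT
        # flow from the question against the Graph API here.
        ...

    # Fires when a file lands in the "uploads" container (assumed name);
    # AzureWebJobsStorage is the function app's default storage connection.
    @app.blob_trigger(arg_name="blob", path="uploads/{name}",
                      connection="AzureWebJobsStorage")
    def push_to_graph(blob: func.InputStream):
        upload_to_graph(blob.name, blob.read())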

 


Freddy
  • Thinkwise Local Partner Brasil
  • November 26, 2024

Thanks @Anne Buit for the reply. I certainly hope for out-of-the-box support, especially with Graph.

That said, I do have a 'workaround' via local S3 object storage that syncs with or copies to OneDrive.

However, this isn't feasible at the moment either, as the platform only supports S3 on AWS. I don't know how difficult it would be to make the AWS S3 file connector more generic, so that the base URL and bucket can be modeled. If that were supported, I could use the S3 file connector to push files to S3 storage, and from there I would have the liberty to push them to other locations (Graph in this case) for safekeeping.
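
As context for how generic that could be: outside the platform, S3 SDKs typically only need the endpoint URL overridden to talk to S3-compatible storage. A minimal boto3 sketch, where the endpoint, credentials, and bucket name are all placeholders:

    import boto3

    # Placeholder endpoint and credentials for an S3-compatible service
    # (MinIO, for example); only the endpoint differs from plain AWS usage.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.local.example:9000",
        aws_access_key_id="<key>",
        aws_secret_access_key="<secret>",
    )

    # The upload itself is the same call as against AWS S3.
    s3.upload_file("your-backup-file.bak", "backups", "your-backup-file.bak")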

 

CC @Arie V as also mentioned in the TCP ticket.

