
I have some big files to upload to Drive/SharePoint via Graph.

How do I model this scenario in Thinkwise?

Steps to Upload a Large File Using a Resumable Upload Session

  1. Create an Upload Session. Use the /drive/items/{parent-id}:/{filename}:/createUploadSession endpoint to initiate the upload process.

    Request:

    POST https://graph.microsoft.com/v1.0/me/drive/items/{parent-id}:/{filename}:/createUploadSession
    Authorization: Bearer <access_token>
    Content-Type: application/json

    Body (Optional):

    {
      "item": {
        "@microsoft.graph.conflictBehavior": "rename",
        "name": "your-backup-file.bak"
      }
    }

    Response: The response contains an uploadUrl that you use to upload the file in chunks.

    Example Response:

    {
      "uploadUrl": "https://upload.url.for.your.session",
      "expirationDateTime": "2024-11-16T23:59:00Z"
    }

  2. Upload the File in Chunks. Divide your file into chunks and upload each chunk using a PUT request to the uploadUrl.

    Chunk Size:

    • Recommended size: 10 MB (10,485,760 bytes).
    • Maximum size: 60 MB (62,914,560 bytes).

    Request for Each Chunk:

    PUT {uploadUrl}
    Content-Range: bytes {start}-{end}/{total-size}
    Content-Type: application/octet-stream

    Body: The binary data for the chunk.

    Example Content-Range:

    • For the first 10 MB chunk of a 580 MB file:

      Content-Range: bytes 0-10485759/580000000

    Continue until all chunks are uploaded.

  3. Complete the Upload. Once the final chunk is uploaded, the file is automatically saved in the specified location (a Python sketch of the full flow follows these steps).

    If successful, the API responds with the metadata of the uploaded file, such as:

    {
      "id": "unique-file-id",
      "name": "your-backup-file.bak",
      "size": 580000000,
      "webUrl": "https://graph.microsoft.com/v1.0/me/drive/items/unique-file-id"
    }
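
For reference, here is a minimal Python sketch of the three steps above. It assumes the requests library and an already-acquired access token; ACCESS_TOKEN, PARENT_ID, and the local file name are placeholders, not values from the platform.

    # Sketch of the resumable upload flow: create a session, PUT the file in
    # chunks with Content-Range headers, then read the item metadata from the
    # response to the final chunk. Placeholders: ACCESS_TOKEN, PARENT_ID, file name.
    import os
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    CHUNK_SIZE = 10 * 1024 * 1024  # 10 MB, the recommended chunk size

    def create_upload_session(token, parent_id, filename):
        # Step 1: create the upload session and return its uploadUrl.
        url = f"{GRAPH}/me/drive/items/{parent_id}:/{filename}:/createUploadSession"
        body = {"item": {"@microsoft.graph.conflictBehavior": "rename", "name": filename}}
        resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        return resp.json()["uploadUrl"]

    def upload_in_chunks(upload_url, path):
        # Step 2: PUT each chunk with a Content-Range header. The upload URL is
        # pre-authorized, so these requests need no Authorization header.
        total = os.path.getsize(path)
        with open(path, "rb") as f:
            start = 0
            while start < total:
                chunk = f.read(CHUNK_SIZE)
                end = start + len(chunk) - 1
                resp = requests.put(
                    upload_url,
                    data=chunk,
                    headers={
                        "Content-Range": f"bytes {start}-{end}/{total}",
                        "Content-Type": "application/octet-stream",
                    },
                )
                resp.raise_for_status()
                start = end + 1
        return resp  # response to the final chunk

    # Step 3: the response to the last chunk carries the uploaded item's metadata.
    upload_url = create_upload_session("ACCESS_TOKEN", "PARENT_ID", "your-backup-file.bak")
    final = upload_in_chunks(upload_url, "your-backup-file.bak")
    if final.status_code in (200, 201):
        item = final.json()
        print(item["id"], item["size"], item["webUrl"])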

Hi Freddy,

This will require quite a complex process flow that slices the file to upload and calls an HTTP connector or web connector endpoint multiple times. The entire upload will have to be orchestrated in the process flow.

Depending on your hosting scenario, I’d say it may be easier for now to use a script that uses an SDK for uploading, and deploy this as an Azure Function that triggers on an Azure blob container or something similar.
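
Roughly, such a function could look like the sketch below (Python v2 programming model); the container name, connection setting, token handling, and the upload_to_graph helper are placeholders, not platform features.

    # Rough sketch only: an Azure Function that fires when a blob lands in a
    # container and pushes it to Graph. upload_to_graph() stands in for the
    # resumable-upload logic sketched earlier; GRAPH_TOKEN is a placeholder.
    import os
    import azure.functions as func

    app = func.FunctionApp()

    @app.blob_trigger(arg_name="backup",
                      path="backups/{name}",
                      connection="AzureWebJobsStorage")
    def push_backup_to_graph(backup: func.InputStream):
        data = backup.read()               # blob content as bytes
        token = os.environ["GRAPH_TOKEN"]  # acquiring the token is out of scope here
        upload_to_graph(token, backup.name, data)  # hypothetical helper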

The proper solution would be implementing your idea:

 


Thanks @Anne Buit for the reply. I certainly hope for out-of-the-box support, especially for Graph.

That said, I do have a 'workaround' via a local S3 block storage that syncs with or copies to OneDrive.

However, this isn't feasible at the moment either, as the platform only supports S3 from AWS. I don't know how difficult it would be to make the AWS S3 file connector more generic, so that the base URL and bucket can be modeled. If that were supported, I could use the S3 file connector to push files to an S3 block storage, and from there I would have the liberty to push them to other locations (Graph in this case) for safekeeping.
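
For what it's worth, outside the platform this is mostly a matter of pointing an S3 client at a different endpoint; a boto3 sketch, where the endpoint URL, bucket, and credentials are placeholders:

    # Sketch only: targeting an S3-compatible block store instead of AWS by
    # supplying endpoint_url. Endpoint, bucket, and credentials are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.local-block-storage.example",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    s3.upload_file("your-backup-file.bak", "backups", "your-backup-file.bak")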

 

CC @Arie V, as also mentioned in the TCP ticket.

