wanted behaviour on chunking big files and upload them

Daimonion
Hello, first post here on this mailing list.

My brother-in-law set up a Debian server with ownCloud 8.2.4 and we are both using it as a backup system.

I'm using the ownCloud Client 2.1.1 Build 5837 and I'm syncing files over the internet to this ownCloud instance.

At the moment I'm uploading a bunch of really big files (4.3/7.8 GB each, 600 GB in total), as I want to store my system backup automatically outside my house.

So far everything is good with the setup. ownCloud runs very stably, and I synced many files (small ones in the KB range and big ones of 1-2 GB) before the really big files (4.3/7.8 GB) without problems.

But with these files I ran into a problem: the 24h forced disconnect of my DSL line. Every night my DSL line gets disconnected (Deutsche Telekom), which disrupts the upload. No problem for ownCloud: it logs a failure ("write not possible") and as soon as the connection is online again it resumes the upload. Let me explain with an example.

The big files are split into 1600 chunks. So let's say I start an upload of a new file in the evening. Then maybe the first 1000 chunks (3 at a time) get uploaded before the forced disconnect disrupts the internet connection. The client fails on chunks 1000, 1001 and 1002.
After re-establishing the connection, the client resumes the upload with chunks 1000, 1001 and 1002. Fine. But when it reaches the last chunk, 1599, it starts over with chunk 0, and it gets even worse: there are 3 parallel uploads, and the first time the file was uploaded, all 3 upload sockets were used by the same file. Now that it has started over from chunk 0, it uses only 1 upload socket; the 2 remaining sockets are used by other files (even ones of the same big size, 7.8 GB).

Now, as only 1 upload socket remains, it takes ages to upload the remaining chunks. And there is no indication of how many chunks still have to be uploaded.
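
To make clear what I would expect, here is a rough sketch (Python, not real client code; SERVER and AUTH are made up) of how I understand the old chunk naming -- each chunk is PUT to <name>-chunking-<transferid>-<chunkcount>-<index> -- and of what I would expect resuming to mean, i.e. never re-sending a chunk the server already acknowledged:

# Minimal sketch of the old ownCloud chunked upload ("chunking v1").
# The naming scheme and the OC-Chunked header are my understanding of
# the protocol; SERVER and AUTH are placeholders, not from this setup.
import math
import os
import requests

SERVER = "https://cloud.example.org/remote.php/webdav"  # hypothetical
AUTH = ("user", "password")                             # hypothetical
CHUNK_SIZE = 10 * 1000 * 1000                           # 10 MB

def upload_chunked(local_path, remote_name, transfer_id, done=None):
    """Upload one file in chunks; 'done' records acknowledged chunk
    indices so that a resumed run skips them instead of re-sending."""
    done = set() if done is None else done
    size = os.path.getsize(local_path)
    n_chunks = math.ceil(size / CHUNK_SIZE)
    with open(local_path, "rb") as f:
        for index in range(n_chunks):
            if index in done:        # resume: skip already-acknowledged chunks
                continue
            f.seek(index * CHUNK_SIZE)
            data = f.read(CHUNK_SIZE)
            url = (f"{SERVER}/{remote_name}"
                   f"-chunking-{transfer_id}-{n_chunks}-{index}")
            resp = requests.put(url, data=data, auth=AUTH,
                                headers={"OC-Chunked": "1"})
            resp.raise_for_status()
            done.add(index)          # mark only after the server confirmed
    return done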


What is the intended behaviour on a forced interruption during chunked uploading, and is there anything we can configure so that a forced disconnect doesn't end in eternally uploading the same chunks again and again?

Is there a possible way (server- or client-side) to see which chunks were uploaded correctly and which chunks still have to be uploaded to complete the file on the server?
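
For example, could something like this work on the server? I am assuming here (and I am not sure about this) that pending chunks sit in the user's cache folder inside the data directory until the file is assembled; the paths are placeholders:

# Hedged sketch: list which chunk indices of a pending upload have
# already arrived on the server. The data-directory layout is an
# assumption; adjust DATA_DIR and USER to the actual installation.
import os
import re

DATA_DIR = "/var/www/owncloud/data"   # hypothetical data directory
USER = "daimonion"                    # hypothetical user name

cache = os.path.join(DATA_DIR, USER, "cache")
pattern = re.compile(r"-chunking-(\d+)-(\d+)-(\d+)$")
for name in sorted(os.listdir(cache)):
    match = pattern.search(name)
    if match:
        transfer_id, total, index = match.groups()
        print(f"transfer {transfer_id}: chunk {index} of {total} present")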


Thanks in advance

Regards
Daimonion
Re: wanted behaviour on chunking big files and upload them

Roeland Douma
Hi Daimonion,

This seems like a bug.
Could you retry with the freshly released 2.2 client? We tweaked some
things in the chunked upload there, so maybe the problem is fixed.

Other than that, I want to stress that ownCloud is not a backup solution.
The sync client syncs. So if the file on your computer is removed (and
the sync client is running), it will delete the file on the server as well (and
vice versa). So please use additional backup software to back up the data on
your server.

Cheers,
--Roeland

Re: wanted behaviour on chunking big files and upload them

Daimonion
Thanks for your answer.

I will try the new client as soon as the file I have already tried three times in a row has finished uploading. ;)

I'm also now aware of the discussion about a new chunking algorithm in 9.x, here for the server:

https://github.com/owncloud/core/pull/20118

and here for the client:

https://github.com/owncloud/client/issues/4019
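
As far as I can tell from those discussions, the proposed flow would look roughly like this (a sketch of the proposal as I read it, in Python; endpoints, names and credentials are placeholders and may change before release):

# Rough sketch of the proposed "chunking NG" flow: open an upload
# session (MKCOL), PUT numbered chunks into it, then MOVE the special
# ".file" node onto the destination so assembly happens atomically.
import requests

SERVER = "https://cloud.example.org"              # hypothetical
AUTH = ("user", "password")                       # hypothetical
SESSION = f"{SERVER}/remote.php/dav/uploads/user/transfer-4711"
DEST = f"{SERVER}/remote.php/dav/files/user/backup.tib"
CHUNK_SIZE = 10 * 1000 * 1000                     # 10 MB

requests.request("MKCOL", SESSION, auth=AUTH)     # open the upload session
with open("backup.tib", "rb") as f:
    index = 0
    while True:
        data = f.read(CHUNK_SIZE)
        if not data:
            break
        requests.put(f"{SESSION}/{index:05d}", data=data, auth=AUTH)
        index += 1
requests.request("MOVE", f"{SESSION}/.file", auth=AUTH,
                 headers={"Destination": DEST})   # assemble on the server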

Once all of this is announced officially, my brother-in-law will hopefully update to the 9.x server soon.

And yes, I know ownCloud is not a real backup solution. The files are True Image containers which are also mirrored to my personal NAS and stored on another hard disk in my computer. The ownCloud copy is just the third automatic backup, in case of fire.

I will report whether client version 2.2.0 works better than 2.1.1.

Regards
Daimonion
Re: wanted behaviour on chunking big files and upload them

Daimonion
So, a first report:

V2.2.0 works way better. In the meantime the server has been updated to 8.2.5, and the upload bandwidth is shown correctly.

I get 3 exceptions a day where the server responds with a bad request:

"<?xml version="1.0" encoding="utf-8"?>
<d:error xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns">
  <s:exception>Sabre\DAV\Exception\BadRequest</s:exception>
  <s:message>expected filesize 10000000 got 0</s:message>
</d:error>
"

The message "expected filesize 10000000 got 0" suggests a 10 MB chunk PUT arrived with an empty body, so I think that is connection-related and not client-related.
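
In case anyone wants to pull these errors out of their logs: the body is plain SabreDAV error XML, so a few lines are enough to extract the exception class and message (this snippet just parses the response quoted above):

# Parse a SabreDAV d:error body like the one quoted above.
import xml.etree.ElementTree as ET

body = """<?xml version="1.0" encoding="utf-8"?>
<d:error xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns">
  <s:exception>Sabre\\DAV\\Exception\\BadRequest</s:exception>
  <s:message>expected filesize 10000000 got 0</s:message>
</d:error>"""

ns = {"s": "http://sabredav.org/ns"}
root = ET.fromstring(body)
print(root.find("s:exception", ns).text)   # Sabre\DAV\Exception\BadRequest
print(root.find("s:message", ns).text)     # expected filesize 10000000 got 0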

Thanks so far for your help
Re: wanted behaviour on chunking big files and upload them

Daimonion
Hey

Damn, I have to retract my statement. Chunks were uploaded a second time with 2.2.0 as well.
It just took ages for the client to show this behaviour, because my upload bandwidth is very limited and the files are really big.

So, one file has 839 chunks (10 MB chunk size) and the upload includes 4-7 files.
That's another change: 2.1.1 uploaded just 3 files at the same time.
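
For scale, the chunk count is just the file size divided by the chunk size, rounded up; the file size below is a value I picked so the numbers line up with my 839 chunks:

import math

CHUNK_SIZE = 10 * 1000 * 1000    # 10 MB, the "expected filesize 10000000"
file_size = 8_385_000_000        # hypothetical, roughly one 7.8 GiB image
print(math.ceil(file_size / CHUNK_SIZE))   # -> 839 chunks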

When the upload of these files is done, I will test the client with 1-2 files and try to reproduce the behaviour.

Regards
Daimonion
Re: wanted behaviour on chunking big files and upload them

Daimonion
Hello

One more piece of information regarding the doubled upload of chunks:

After the file's chunks start over and are uploaded a second or third time, the upload sometimes finishes all chunks in the middle of the chunk list:

System_full_b2_s1_v6.tib-chunking-1472021408-839-596" )  FINISHED WITH STATUS 0 "" QVariant(int, 201) QVariant(QString, "Created")
05-18 07:17:03:888 0x5008de0 OCC::SyncJournalErrorBlacklistRecord::update: blacklisting  "Backup/Moiner/System_full_b2_s1_v6.tib"  for  25 , retry count  1
05-18 07:17:03:890 0x5008de0 OCC::SyncJournalDb::updateErrorBlacklistEntry: set blacklist entry for  "Backup/Moiner/System_full_b2_s1_v6.tib" 1 "Der Server hat den letzten Block nicht bestätigt. (Der E-Tag war nicht vorhanden)" 1463548623 25 1462103042 ""
05-18 07:17:03:893 0x5008de0 OCC::SyncEngine::slotItemCompleted: void OCC::SyncEngine::slotItemCompleted(const OCC::SyncFileItem&, const OCC::PropagatorJob&) "Backup/Moiner/System_full_b2_s1_v6.tib" INSTRUCTION_NEW 2 "Der Server hat den letzten Block nicht bestätigt. (Der E-Tag war nicht vorhanden)"
05-18 07:17:03:910 0x5008de0 OCC::SocketApi::sendMessage: SocketApi:  Sending message:  "STATUS:ERROR:E:\OwnCloud Andreas\Backup\Moiner\System_full_b2_s1_v6.tib"


In this log example it was chunk 596. But then the server replied that the ETag was missing (the German log message translates to "The server did not confirm the last chunk. (The ETag was not present)").

Is it correct that the ETag is only sent with the last chunk?
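
From the log it looks to me as if the client only expects the ETag in the response to the final chunk's PUT, something like this (the URL is pieced together from the log above -- the last index of 839 chunks is 838 -- and the rest are placeholders):

# Sketch: check the final chunk's response for the assembled file's
# ETag; if it is missing, the client reports exactly the error above.
import requests

SERVER = "https://cloud.example.org/remote.php/webdav"  # hypothetical
AUTH = ("user", "password")                             # hypothetical
url = f"{SERVER}/System_full_b2_s1_v6.tib-chunking-1472021408-839-838"

resp = requests.put(url, data=b"...final chunk bytes...", auth=AUTH,
                    headers={"OC-Chunked": "1"})
if resp.headers.get("ETag") is None and resp.headers.get("OC-ETag") is None:
    raise RuntimeError("server did not confirm the last chunk: no ETag")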


Regards Daimonion