Let's say I need to move 12 TB of data from one server to another in the fastest possible way - in other words, I should be able to get line speed from my slowest server.
>FTP: Slow as fuck unless you use the two proprietary clients that can segment transfers
>SFTP: Like FTP, but slower
>Rsync: Also slow
There's got to be a FOSS package that's meant to move a lot of data fast, right?
Are you people seriously saying that there's no application, no protocol for high speed file transfer?
Leo Rivera
http
Levi Long
As long as you have a direct fiber link, then sure.
Camden Rogers
Welp. One more item on the list of things that freetards can't do in the real world.
Joseph White
You are limited by the speed of the connection through all its hops. Reduce hops. We have no fucking idea what your hops are like. Provide a traceroute for us.
Nathaniel Morgan
>SFTP: Like FTP, but slower
More like FTP, but secure. Running regular FTP over the open internet is cybersecurity suicide. Adjust your frame size, scrub, it will go plenty fast
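For reference, the frame-size adjustment looks something like this on Linux (eth0 and other-server are placeholders; it only helps if BOTH ends and every hop in between support jumbo frames):

```shell
# Check the current MTU on the transfer interface (eth0 is a placeholder)
ip link show eth0 | grep -o 'mtu [0-9]*'
# Raise it to jumbo frames -- needs root, and every switch/router on the path must agree
sudo ip link set dev eth0 mtu 9000
# Verify a full-size frame survives the path without fragmenting
# (8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000)
ping -M do -s 8972 -c 3 other-server
```

If the ping fails with "message too long", some hop on the path is still at 1500 and the bigger MTU buys you nothing.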
Sebastian Morales
If freetards found a way to get around information theory, that would be pretty impressive.
It's not possible to quickly move data without a fast connection. Period. The software is irrelevant.
Bentley Walker
I know what this line is capable of because a certain (stupidly expensive) proprietary package can max it out.
>reduce hops
Yeah I'll get right on rewriting my ISP's routes, moron.
Logan Wood
>the software is irrelevant
Really now, because I can transfer a gig at line speed with [other package], and not with s/ftp,rsync,etc.
Kevin Gonzalez
Bittorrent you nub! Grab BTsync and be done with it!
Gavin Harris
The fastest possible way is sneakernet. Other than that, SFTP is fast if it's properly configured.
Nicholas Rivera
ISP is not the only solution you fuck nut.
Jacob Flores
BTSync is closed and proprietary.
At least two of you are brain damaged if you think that software makes no difference in transfer speed.
Thomas Robinson
Then how does one "properly configure" it? On a server with a gigabit upload and a client with 100M download, I should be able to pull a theoretical maximum of 12MB/sec.
Instead, it'll do maaaaybe 400KB/sec.
This is not a network problem, this is a software problem.
Joseph Johnson
What is the read and write speed of the hard drives in your two servers?
I can promise you it's one of your bottlenecks
Transferring a gig quickly is easy because you can transfer it from RAM to RAM
You could have a 10Gb/s internet connection, but if you are using hard drives your fastest speed is likely around 30MB/s
And that is assuming every connection between you and that server is running at super speed
Also you haven't given us any numbers, how fast is your ftp running? It should be plenty fast
>I should be able to pull a theoretical maximum of 12MB/sec
That is wrong. That's only true if you have a direct line, which you do not have. There is no way to do a fast transfer without a fast direct connection, no matter what software you use.
Samuel Wilson
Both are running SSDs, so let's be conservative and say that each side can read and write at 200MB/s. (The drives themselves are capable of at least triple that).
So no, it's not a storage bottleneck.
FTP using any client will only transfer about 400KB/s. Maybe 2MB/s if I transfer multiple files at once, 3.5MB/s if I use one of the shitty proprietary clients that can segment transfers.
Again, I know what this machine is capable of on this network connection. There's another package out there that will get me 10MB/sec or thereabouts, but it's proprietary shitware and very, very expensive. I can't use it.
In other words, as I've said twice now, THIS IS NOT A HARDWARE LIMITATION, THIS IS A SOFTWARE LIMITATION.
Gabriel Bennett
>There is no way to do a fast transfer without a fast direct connection, no matter what software you use.
Funny that when I transfer files using torrents it uses my entire upload. Face it, freetard file transfer software is shit.
Elijah Morris
But that's wrong, since I know it's possible with a commercial package I can't use here.
The reason I know this is because I had a license for the fucking thing which expired a while ago. THIS server, THIS client, on THIS internet connection, can move about 5-8MB/s, RELIABLY.
Meanwhile, S/FTP gets me barely 1MB/S.
Fucking read what I'm writing, here.
Blake Morales
Just copy and paste it you idiot
Jaxson Young
And these shitty clients and proprietary package are?
Zachary Torres
host it on a webserver and wget it famalam
Elijah Lewis
At least now I can say it without being accused of shillery.
Costs about $14K/year for a gigabit limited server license. The client is free.
The FTP clients are things like CuteFTP, which, while fast and capable of segmented transfers (something the guys at FileZilla are autistic about not supporting), is a really shitty program.
Gabriel Jones
FedEx Overnight
Oliver White
Well I guess you could make a kernel module that would not jump into usermode after every disk I/O syscall, but that would only be a 7% increase in speed since network and disk I/O take the most time. So you're kind of stuck with SFTP (if it's a binary protocol, can't remember)
Zachary Wilson
But it's not a software limitation unless you are too dumb to figure shit out
I've transferred files on ftp as high as 30Mb/s (my house's max download speed) from my office to my home, and I do this regularly
I use filezilla
What isp are you guys using? I know a few less than scrupulous ones limit ftp transfer speed
Additionally some isps read in the fine print "upload speed limited to 5Mb/s" even when you're paying for 100 down
Have you tried your "proprietary" solution on these two specific servers?
Also, if you are transferring many small files via ftp can you approach your max speeds?
Zachary James
segment your fucking data manually then, for fucks sake
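If you actually go the manual route, coreutils can do the carving; something like this (sizes and names are just examples):

```shell
# Make a dummy 16 MB file and carve it into 4 MB numbered chunks
dd if=/dev/urandom of=bigfile bs=1M count=16 2>/dev/null
split -b 4M -d bigfile bigfile.part.   # produces bigfile.part.00 .. bigfile.part.03
# Push the parts over parallel connections with any dumb client, then on the far end:
cat bigfile.part.* > bigfile.rejoined
cmp bigfile bigfile.rejoined && echo "reassembled OK"
```

The glob sorts the numbered parts back into order, so `cat` reassembles them byte-for-byte.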
Jace Rogers
Torrents can get that high because it is a distributed download
It is many little downloads, with minimal connections each
Josiah Garcia
>Have you tried your "proprietary" solution on these two specific servers?
For the third goddamn time, YES. If I still had a non-expired license for that application, I could get between 5MB/s average, 10MB/s peak.
That's why this is pissing me off so much, because I already know what this system is capable of.
Levi Young
I think you just have no idea what you are doing OP go back to school kiddo
Jose Richardson
And there's the freetard answer. Well if you just manually carve your files up into chunks...
Zachary Turner
>S/FTP gets me barely 1MB/S
If by S/FTP you mean sftp then there's something wrong with your box or connection.
Is it a CPU issue, packet loss issue or throttling issue?
Logan Gomez
> getting mad at the replies you requested on an anime discussion website
Learn to ftp better
You just need to get good
Grayson James
When will you Freecucks learn that free software is almost always inferior to proprietary software?
Mason Stewart
7zip has that option, it's not exactly carving, you just click a few times
Hunter Torres
Oh and don't answer "no because it works with my shit software" because we need to know which aspect of your broken system it works around.
Isaac Taylor
It's a "TCP is shit, FTP is a shit protocol" issue.
Far as I understand how this aspera thing works, it blasts the files over UDP, using an SSH control channel to adjust TCP window sizes, request redelivery, and such on the fly.
Caleb Garcia
Well it isn't CPU because there's very little load, there's nil packet loss on this line, and software can't work around bandwidth throttling.
So I don't know, smartass, you tell me.
Josiah Ross
>non free software gets $15,000,000 put into its development, 50 engineers running around the clock for a year+ developing it
>similar free software gets about 80 hours of some random guy's time put into its development before he gets bored and moves on
>non free runs 8% faster
"When will you Freecucks learn that free software is almost always inferior to proprietary software?"
Henry Howard
>there's this thing that'll do it automatically but nah just split your shit up by hand every time you want to move files
This is why Dropbox is rich and you're not.
Nicholas Walker
> implying I'm not rich
Owen Price
pay for delivery w/ plane and install the HDD in the other server
Adam Evans
how is the data structured? if it's made up of tens of thousands of files and folders the transfer speed will be much slower than sending a single file
nonetheless, the transfer speed will be limited by the quality of the route. even if both ends can do 100Mbps the route may be slower
Angel Parker
>software can't work around bandwidth throttling
It often can. Not all throttling is the same.
>I don't know
That's a fair answer. Why don't you try to debug what the bottleneck of sftp is? A state of D would suggest disk, R CPU, and S network.
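Quick way to sample that state on Linux via /proc (the PID here is this shell's own, as a stand-in for the real sftp/rsync process):

```shell
# Grab the PID of the transfer process, e.g. with: pgrep -f 'rsync --server'
pid=$$   # placeholder: this shell's own PID
# Field 3 of /proc/<pid>/stat is the state letter:
# R = on CPU, S = interruptible sleep (likely waiting on network),
# D = uninterruptible sleep (likely waiting on disk)
state=$(awk '{print $3}' "/proc/$pid/stat")
echo "pid $pid state: $state"
```

Sample it a few times in a loop; whichever letter dominates tells you which resource the transfer is actually stuck on.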
Luis Lewis
That doesn't mean that it's bottlenecked due to hardware, it could very well mean that the application is using its resources inefficiently.
The fact that other software exhibits better performance tends to point to it not being hardware.
Joseph Taylor
Sftp is unrelated to ftp.
TCP sucks but it does not limit you to 1MB/s on a fast, non-lossy connection
Zachary Jenkins
Torrent doesn't have this problem
Anthony Butler
>it could very well mean that the application is using its resources inefficiently
If it uses its resources inefficiently then it'll block on hardware.
SSH as in sftp is a huge CPU hog for example, and will often limit you to 5MB/s on a medium spec system. In those cases you can get a huge speed boost from a different cipher.
If the CPU is not pegged, it makes zero difference though.
You have to know which resource it is that needs to be used differently.
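Checking whether the cipher is the choke point is cheap, assuming stock OpenSSH (host and path are placeholders):

```shell
# List the ciphers your OpenSSH build supports
ssh -Q cipher
# Retry the transfer with a cheap, usually AES-NI-accelerated cipher
scp -c aes128-ctr user@host:/path/to/bigfile .
# Same idea for sftp
sftp -c aes128-ctr user@host
```

If the throughput doesn't move, the cipher wasn't the bottleneck and you're back to looking at the network or the protocol.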
Owen Lee
This is because torrent packages up the data beforehand. It works on a block level, not a good level, and a block can contain thousands of files
Colton Hughes
>good
File. Duck you Android keyboard
John Bell
torrent protocols break the data into pieces; you can have a torrent with 1 million files made up of only 1000 pieces. it doesn't help very much because the clients still have to write to and read from 1 million files
Alexander James
If you have some sort of MPLS network you could leverage, it would help, but otherwise you're pretty much screwed as far as speed is concerned
Gabriel Thomas
>it doesn't help very much
It helps a lot if your problem is network latency. Creating a thousand files in a second is doable, but doing a thousand double round trips like rsync and scp do is usually not
Julian Gutierrez
You're missing where it's faster than freetard ftp. He said he's on SSDs both sides. Why do you continually think this is a storage bottleneck for 100Mbit when he's already said he has no such problem on another client? He'd need to manually segment to get any sort of network use out of freetard FTP. Torrent does this automatically.
Adam Cruz
Is that u Ed? I thought u stopped hackin
Asher Robinson
Rsync seems like the most logical solution. I can usually max out a gigabit connection during backups. What kind of files are we talking about? Thousands of small files or a few large ones?
Aiden Evans
Compress everything you fat, ugly, retarded moron.
Juan Green
Also you do realise that rsync calls upon ssh during remote backups, right? If you don't care about encryption (please don't do this) you could always just run the rsync daemon and I guarantee you'll max out your connection.
Levi Watson
It fucking will! I use it to distribute files across 250+ clients and because it's a swarm it's fast as fuck! I can control the clients with a little scripting and I don't have to worry about intermittent connections because bittorrent can handle connections constantly starting and stopping. A 1GB file takes roughly 2 hours and it only gets faster once more clients have the whole payload. OP needs to fucking listen!
Caleb Lewis
Perhaps I don't. Chances I'm being MITM'd here are tiny.
Some data:
$ rsync -avzr --progress -e ssh user@blackbox:/home/user/lvmeasure/20160101.dat .
receiving incremental file list
./
20160101.dat
      9048388  18%  313.46kB/s  0:02:10
#Remote side
user   9932  1.8  0.0  21136  1928 ?  Ss  17:47  0:00 rsync --server --sender -vlogDtprze.iLsf . /home/user/lvmeasure/
Same shitty speeds I've been seeing this whole time.
Nathaniel Butler
With the right server you can control block size however you want, I fear no block size!
Cooper Martin
Thanks for proving my point m8
Aaron Evans
OK, looks like its the ssh encryption overhead slowing you down. Start an instance of rsyncd and try with just -avrP to rsync://host:/share
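A minimal version of that setup (module name, path, and port are placeholders; the daemon protocol has no encryption or real auth, so keep it off the open internet):

```shell
# --- on the server: write a one-module config and start the daemon ---
cat > /tmp/rsyncd.conf <<'EOF'
[share]
    path = /home/user/lvmeasure
    read only = yes
    use chroot = no
EOF
rsync --daemon --config=/tmp/rsyncd.conf --port=8730

# --- on the client: no -z (the files are already compressed), no ssh ---
rsync -avrP rsync://blackbox:8730/share/ .
```

This takes the ssh cipher overhead out of the path entirely, which is the whole point of the test.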
Adam White
Make one a usenet server and the other a Usenet client.
#Remote side
root  29677  0.0  0.0  11100  504 ?  Ss  18:06  0:00 rsync --daemon
(for comparison, these files are 50 megabytes apiece, and already compressed by the program that generated them)
Christopher Lee
That's definitely not how it's supposed to go. There's something strange going on with your connection. What do you get when you run an iperf benchmark between the hosts?
Something's definitely fucky, because even an http download can do better than that.
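For reference, the benchmark is just this, assuming iperf3 is installed on both boxes (blackbox being the hostname from the rsync output above):

```shell
# On the receiving box: listen
iperf3 -s
# On the sending box: default 10-second TCP test, report in Mbit/s
iperf3 -c blackbox -f m
# Then crank parallel streams to see if a single TCP flow is the cap
iperf3 -c blackbox -P 8 -f m
```

If one stream crawls but eight streams fill the pipe, the problem is per-connection (congestion control, throttling), not raw bandwidth.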
Jaxon Thomas
Tracing route to blackbox [lol no] over a maximum of 30 hops:
1
Nathan Howard
>cogent
Asher Allen
Rsync is good and can easily saturate 10GbE but it is not designed for many TB of data or millions of files. But there is a workaround. mjanja.ch/2014/07/parallelizing-rsync/
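The gist of the workaround, sketched with xargs (worker count, paths, and hostname are placeholders):

```shell
# Fan the top-level directories out across 4 concurrent rsync workers.
# Each worker gets its own connection, which helps when one stream
# can't fill the pipe on its own.
cd /data/src
ls -d */ | xargs -P4 -I{} rsync -a {} user@blackbox:/data/dst/{}
```

Works best when the top-level directories are roughly equal in size; otherwise one worker ends up doing most of the transfer anyway.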
Robert Diaz
Literally just make a torrent
It will saturate your connection if you configure it correctly
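A sketch with the Transmission CLI tools, assuming both boxes can reach each other directly and DHT/PEX is enabled, since there's no tracker (paths are placeholders):

```shell
# On the source box: build the torrent and start seeding
transmission-create -o payload.torrent /data/payload
transmission-cli -w /data payload.torrent   # seeds once it has verified the data
# Copy payload.torrent to the destination (it's tiny), then there:
transmission-cli -w /data payload.torrent   # finds the seeder and pulls the pieces
```

For just two hosts this mostly buys you resumability and automatic piece-level parallelism, not a swarm.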
Gavin Cruz
>Maybe 2MB/s if I transfer multiple files at once
Look at the TCP congestion control algorithm; sftp is probably using a single TCP connection. Do you have other clients on the network?
Kevin Bennett
Set up an HTTP server and use a parallel downloader application like aria2c
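E.g., something like this (port, hostname, and filename are placeholders):

```shell
# On the source box: throwaway HTTP server rooted in the data directory
cd /data && python3 -m http.server 8000
# On the destination: up to 16 connections to the one server,
# splitting the file into pieces of at least 1 MB each
aria2c -x16 -s16 -k1M http://blackbox:8000/bigfile
```

The parallel connections are doing the same job as the segmented-transfer FTP clients, just with commodity FOSS tools.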
Samuel Flores
>SFTP is fast if it's properly configured.
Could you please spoonfeed me a little bit on that or post a link to a tutorial? I also have openssh installed on my Debian server and back up my files onto it using SFTP. I only get like 2MB/s on average despite the hardware being capable of more.