Let's say I need to move 12 TB of data from one server to another in the fastest possible way - in other words, I should be able to get line speed from my slowest server.

>FTP: Slow as fuck unless you use the two proprietary clients that can segment transfers.
>SFTP: Like FTP, but slower
>Rsync: Also slow

There's got to be a FOSS package that's meant to move a lot of data fast, right?

Shoebox

Load the info into a van and drive it there.

appears to be a photo management app?

Just install the drive in the other server.

The other server is in another country.

Are you people seriously saying that there's no application, no protocol for high-speed file transfer?

http

As long as you have a direct fiber link, then sure.

Welp. One more item on the list of things that freetards can't do in the real world.

You are limited by the speed of the connection through all its hops.
Reduce hops.
We have no fucking idea what your hops are like.
Provide a traceroute for us.

>SFTP: Like FTP, but slower

More like more secure. Regular FTP over the open internet is cybersecurity suicide. Adjust your frame size, scrub; it will go plenty fast.
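
For the frame size angle, a minimal sketch on Linux (interface name and hostname are placeholders); jumbo frames only help if every hop on the path accepts them, which basically never happens over the open internet:

# check the current MTU and test what the path actually passes (1472 bytes of payload + 28 of headers = 1500)
ip link show dev eth0
ping -M do -s 1472 remotehost

# raising the MTU is only worth it on a link you control end to end
sudo ip link set dev eth0 mtu 9000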

If freetards found a way to get around information theory, that would be pretty impressive.

It's not possible to quickly move data without a fast connection. Period. The software is irrelevant.

I know what this line is capable of because a certain (stupidly expensive) proprietary package can max it out.

>reduce hops

Yeah I'll get right on rewriting my ISP's routes, moron.

>the software is irrelevant

Really now? Because I can transfer a gig at line speed with [other package], and not with s/ftp, rsync, etc.

Bittorrent you nub! Grab BTsync and be done with it!

The fastest possible way is sneakernet.
Other than that, SFTP is fast if it's properly configured.

Your ISP is not the only solution, you fuck nut.

BTSync is closed and proprietary.

At least two of you are brain damaged if you think that software makes no difference in transfer speed.

Then how does one "properly configure" it? On a server with a gigabit upload and a client with 100M download, I should be able to pull a theoretical maximum of 12MB/sec.

Instead, it'll do maaaaybe 400KB/sec.

This is not a network problem, this is a software problem.

What is the read and write speed of the hard drives in your two servers?

I can promise you it's one of your bottlenecks

Transferring a gig quickly is easy because you can transfer it from RAM to RAM.

You could have a 10 Gb/s internet connection, but if you are using hard drives your fastest speed is likely around 30 MB/s.

And that is assuming every connection between you and that server is running at super speed.

Also, you haven't given us any numbers. How fast is your FTP running? It should be plenty fast.

Moving 12 TB, I suspect, will take 8-20 days.
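
Back-of-the-envelope check on that estimate, assuming the 100 Mbit/s end is the bottleneck and actually runs at full rate:

# 100 Mbit/s is roughly 12.5 MB/s; 12 TB at that rate, in days:
echo 'scale=1; (12 * 10^12) / (12.5 * 10^6) / 86400' | bc
# => 11.1, so 8-20 days is the right ballpark once overhead and slow periods are factored in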

deluge-torrent.org/

There you go. Free torrent software.

>hard drive a bottleneck for 100Mbit

Is their 100M download 100% dedicated to you?


Didn't think so

Why wouldn't something like BitTorrent work?

>I should be able to pull a theoretical maximum of 12MB/sec.
That is wrong. That's only true if you have a direct line, which you do not have. There is no way to do a fast transfer without a fast direct connection, no matter what software you use.

Both are running SSDs, so let's be conservative and say that each side can read and write at 200MB/s. (The drives themselves are capable of at least triple that).

So no, it's not a storage bottleneck.

FTP using any client will only transfer about 400KB/s. Maybe 2MB/s if I transfer multiple files at once, 3.5MB/s if I use one of the shitty proprietary clients that can segment transfers.

Again, I know what this machine is capable of on this network connection. There's another package out there that will get me 10MB/sec or thereabouts, but it's proprietary shitware and very, very expensive. I can't use it.

In other words, as I've said twice now, THIS IS NOT A HARDWARE LIMITATION, THIS IS A SOFTWARE LIMITATION.

>There is no way to do a fast transfer without a fast direct connection, no matter what software you use.
Funny how when I transfer files using torrents it uses my entire upload.
Face it, freetard file transfer software is shit.

But that's wrong, since I know it's possible with a commercial package I can't use here.

The reason I know this is because I had a license for the fucking thing which expired a while ago. THIS server, THIS client, on THIS internet connection, can move about 5-8MB/s, RELIABLY.

Meanwhile, S/FTP gets me barely 1 MB/s.

Fucking read what I'm writing, here.

Just copy and paste it you idiot

And these shitty clients and proprietary package are?

host it on a webserver and wget it famalam

At least now I can say it without being accused of shillery.

asperasoft.com/software/transfer-servers/aspera-enterprise-server/

Costs about $14K/year for a gigabit-limited server license. The client is free.

The FTP clients are things like CuteFTP, which, while fast and capable of segmented transfers (something the FileZilla guys stubbornly refuse to support), is a really shitty program.

FedEx Overnight

Well, I guess you could make a kernel module that doesn't jump into usermode after every disk I/O syscall, but that would only be about a 7% increase in speed since network and disk I/O take the most time.
So you're kind of stuck with SFTP (I think it's a binary protocol, can't remember).

But it's not a software limitation unless you are too dumb to figure shit out

I've transferred files over FTP as fast as 30 Mb/s (my house's max download speed) from my office to my home, and I do this regularly.

I use filezilla

What ISP are you guys using? I know a few less-than-scrupulous ones limit FTP transfer speed.

Additionally, some ISPs' fine print reads "upload speed limited to 5 Mb/s" even when you're paying for 100 down.

Have you tried your "proprietary" solution on these two specific servers?

Also, if you are transferring many small files via ftp can you approach your max speeds?

Segment your fucking data manually then, for fuck's sake.

Torrents can get that high because it is a distributed download

It is many little downloads, with minimal connections each

>Have you tried your "proprietary" solution on these two specific servers?

For the third goddamn time, YES. If I still had a non-expired license for that application, I could get between 5MB/s average, 10MB/s peak.

That's why this is pissing me off so much, because I already know what this system is capable of.

I think you just have no idea what you're doing, OP. Go back to school, kiddo.

And there's the freetard answer. Well if you just manually carve your files up into chunks...

>S/FTP gets me barely 1MB/S.
If by S/FTP you mean sftp then there's something wrong with your box or connection.

Is it a CPU issue, packet loss issue or throttling issue?

> getting mad at the replies you requested on an anime discussion website

Learn to ftp better

You just need to get good

When will you Freecucks learn that free software is almost always inferior to proprietary software?

7-Zip has that option; it's not exactly carving, you just click a few times.
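
On the command line that's roughly the following; the path and volume size are just examples, and -mx=0 skips recompression since OP says the files are already compressed:

# split into 1 GiB volumes without recompressing
7z a -mx=0 -v1g chunks.7z /home/user/lvmeasure/
# move chunks.7z.001, chunks.7z.002, ... in parallel, then on the far side:
7z x chunks.7z.001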

Oh, and don't answer "no, because it works with my shit software", because we need to know which aspect of your broken system it works around.

It's a "TCP is shit, FTP is a shit protocol" issue.

As far as I understand how this Aspera thing works, it blasts the files over UDP, using an SSH control channel to negotiate transfer rates, request redelivery, and the like, on the fly.

Well it isn't CPU because there's very little load, there's nil packet loss on this line, and software can't work around bandwidth throttling.

So I don't know, smartass, you tell me.

> non-free software gets $15,000,000 put into its development
> 50 engineers running around the clock for a year+ developing it
> similar free software gets about 80 hours of some random guy's time put into its development before he gets bored and moves on
> non-free runs 8% faster

"When will you Freecucks learn that free software is almost always inferior to proprietary software?"

>there's this thing that'll do it automatically but nah just split your shit up by hand every time you want to move files

This is why Dropbox is rich and you're not.

> implying I'm not rich

pay for delivery w/ plane and install the HDD in the other server

how is the data structured?
if it's made up of tens of thousands of files and folders the transfer speed will be much slower than sending a single file

you can try to archive it before sending or you can try to transfer the whole partition (unix.stackexchange.com/questions/132797/how-to-dd-a-remote-disk-using-ssh-on-local-machine-and-save-to-a-local-disk)

nonetheless, the transfer speed will be limited by the quality of the route. even if both ends can do 100 Mbps the route may be slower
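
a rough sketch of both ideas, with user@remotehost, /data, /dest and /dev/sdb1 as placeholders; tar turns tens of thousands of small files into one sequential stream, and the dd variant is the whole-partition trick from the stackexchange link:

# stream an archive straight over ssh, no temp copy on either side
tar -cf - /data | ssh user@remotehost 'tar -xf - -C /dest'

# or image a remote partition onto the local machine
ssh user@remotehost 'dd if=/dev/sdb1 bs=64M' | dd of=partition.img bs=64M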

>software can't work around bandwidth throttling.
It often can. Not all throttling is the same.

> I don't know
That's a fair answer. Why don't you try to debug what the bottleneck of sftp is? A state of D would suggest disk, R CPU and S network.
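
One way to check that, assuming an OpenSSH sftp session (the process names below are the usual defaults; adjust to whatever actually shows up in ps):

# sample the transfer processes once a second; STAT column:
# D = stuck in disk I/O, R = burning CPU, S = sleeping (usually waiting on the network)
watch -n1 'ps -o pid,stat,pcpu,comm -C ssh,sftp,sftp-server'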

That doesn't mean that it's bottlenecked due to hardware; it could very well mean that the application is using its resources inefficiently.

The fact that other software exhibits better performance tends to point to it not being hardware.

SFTP is unrelated to FTP.

TCP sucks, but it does not limit you to 1 MB/s on a fast, non-lossy connection.

Torrent doesn't have this problem

>it could very well mean that the application is using its resources inefficently
If it uses its resources inefficiently then it'll block on hardware.

SSH (as in sftp) is a huge CPU hog, for example, and will often limit you to 5 MB/s on a medium-spec system. In those cases you can get a huge speed boost from a different cipher.

If the CPU is not pegged, it makes zero difference though.

You have to know which resource it is that needs to be used differently.
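
For anyone who wants to actually test the cipher theory against OP's numbers, a sketch using the same path as his rsync run; aes128-ctr is just an example, ssh -Q cipher shows what both ends support, and -z is dropped since his files are already compressed:

# list the ciphers OpenSSH on this box supports
ssh -Q cipher
# pull with a cheaper cipher and no ssh-level compression
rsync -avP -e "ssh -c aes128-ctr -o Compression=no" user@blackbox:/home/user/lvmeasure/ .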

This is because torrent packages up the data beforehand. It works on a block level, not a good level, and a block can contain thousands of files

>good
File. Duck you Android keyboard

torrent protocols break the data into pieces
you can have a torrent with 1 million files made up of only 1000 pieces
it doesn't help very much because the clients still have to write and read from 1 million files

If you have some sort of MPLS network you could leverage, it would help, but otherwise you're pretty much screwed as far as speed is concerned.

>it doesn't help very much
It helps a lot if your problem is network latency. Creating a thousand files in a second is doable, but doing a thousand double round trips like rsync and scp do is usually not.

You're missing where it's faster than freetard FTP. He said he's on SSDs on both sides. Why do you continually think this is a storage bottleneck for 100 Mbit when he's already said he has no such problem with another client?
He'd need to manually segment to get any sort of network use out of freetard FTP. Torrent does this automatically.

Is that u Ed? I thought u stopped hackin

Rsync seems like the most logical solution. I can usually max out a gigabit connection during backups. What kind of files are we talking about? Thousands of small files or a few large ones?

Compress everything you fat, ugly, retarded moron.

Also you do realise that rsync calls upon ssh during remote backups, right? If you don't care about encryption (please don't do this) you could always just run the rsync daemon and I guarantee you'll max out your connection.

It fucking will! I use it to distribute files across 250+ clients, and because it's a swarm it's fast as fuck! I can control the clients with a little scripting, and I don't have to worry about intermittent connections because BitTorrent can handle connections constantly starting and stopping. A 1 GB file takes roughly 2 hours and only gets faster once more clients have the whole payload. OP needs to fucking listen!

Perhaps I don't. Chances I'm being MITM'd here are tiny.

Some data:
$ rsync -avzr --progress -e ssh user@blackbox:/home/user/lvmeasure/20160101.dat .
receiving incremental file list
./
20160101.dat
9048388 18% 313.46kB/s 0:02:10

#Remote side
user 9932 1.8 0.0 21136 1928 ? Ss 17:47 0:00 rsync --server --sender -vlogDtprze.iLsf . /home/user/lvmeasure/


Same shitty speeds I've been seeing this whole time.

With the right server you can control block size however you want, I fear no block size!

Thanks for proving my point m8

OK, looks like it's the SSH encryption overhead slowing you down. Start an instance of rsyncd and try with just -avrP to rsync://host/share
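
For reference, a minimal rsyncd setup looks something like this; module name, path and uid are just examples (the "files" module matches what OP tries below):

# /etc/rsyncd.conf on the box holding the data
[files]
    path = /home/user/lvmeasure
    read only = true
    uid = nobody

# start the daemon there, then pull from the other side with no ssh in the path
rsync --daemon
rsync -avrP rsync://blackbox/files/ .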

Make one a Usenet server and the other a Usenet client.

That's even slower.

rsync -avrP rsync://blackbox/files/lvmeasure/20160101.dat .
receiving incremental file list
./
20160101.dat
17135272 34% 214.95kB/s 0:02:32

#Remote side
root 29677 0.0 0.0 11100 504 ? Ss 18:06 0:00 rsync --daemon


(for comparison, these files are 50 megabytes apiece, and already compressed by the program that generated them)

That's definitely not how it's supposed to go. There's something strange going on with your connection. What do you get when you run an iperf benchmark between the hosts?
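
For anyone following along, the iperf run is just this (iperf2 syntax, hostname as used elsewhere in the thread):

# on blackbox (the remote server)
iperf -s
# on the local box, a 10-second TCP test
iperf -c blackbox -t 10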

netcat

Legit try this. Or set up NFS.
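
The netcat version, zero encryption and next to no protocol overhead; note the -l/-p syntax differs between netcat flavours (traditional shown here), and /data, /dest and receiverhost are placeholders:

# on the receiving side
nc -l -p 9000 | tar -xf - -C /dest
# on the side holding the data
tar -cf - /data | nc receiverhost 9000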

Euuugh.

[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 3.27 MBytes 2.74 Mbits/sec


Something's definitely fucky, because even an http download can do better than that.

Tracing route to blackbox [lol no]
over a maximum of 30 hops:

1

>cogent

Rsync is good and can easily saturate 10 GbE, but it is not designed for many TB of data or millions of files. But there is a workaround.
mjanja.ch/2014/07/parallelizing-rsync/
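
A minimal sketch of the parallelizing idea, assuming the rsyncd "files" module from earlier, ssh access to blackbox, a /dest/ target directory, and top-level entries without spaces in their names (all of that is assumption):

# one rsync per top-level entry, 8 of them at a time
ssh user@blackbox 'ls /home/user/lvmeasure' | \
    xargs -P8 -I{} rsync -aP rsync://blackbox/files/{} /dest/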

Literally just make a torrent

It will saturate your connection if you configure it correctly
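
If you go that route, a sketch with mktorrent (any client that can build a torrent works; the tracker URL is a placeholder, and for a private two-box transfer you can skip the tracker entirely if your client lets you add the peer by hand):

# on the box holding the data
mktorrent -p -a udp://tracker.example.net:6969/announce -o lvmeasure.torrent /home/user/lvmeasure
# load lvmeasure.torrent into deluge (or whatever client) on both ends and let the data side seed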

>Maybe 2MB/s if I transfer multiple files at once
look at the TCP congestion control algorithm
sftp is probably using a single TCP connection; do you have other clients on the network?
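
if you want to poke at the congestion control side on Linux, the knob is a sysctl; htcp below is just one example of an algorithm aimed at high bandwidth-delay links, assuming the kernel ships the module:

# see what's available and what's currently in use
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control
# switch for the whole box
sudo sysctl -w net.ipv4.tcp_congestion_control=htcp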

set up an HTTP server and use a parallel downloader like aria2c
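
Quick sketch of that; the port is an example, and the single .dat file is the one from OP's rsync test:

# on blackbox, serve the directory over plain HTTP
cd /home/user/lvmeasure && python3 -m http.server 8000
# on the other side, pull it with 8 parallel connections
aria2c -x8 -s8 http://blackbox:8000/20160101.dat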

>SFTP is fast if it's properly configured.

Could you please spoonfeed me a little bit on that, or post a link to a tutorial? I also have OpenSSH installed on my Debian server and back up my files onto it using SFTP. I only get like 2 MB/s on average despite the hardware being capable of more.

support.cerberusftp.com/hc/en-us/articles/203333215-Why-is-SSH2-SFTP-so-much-slower-than-FTP-and-FTPS-

So I have done a little research and the HPN-SSH patches should make it faster, right?