• 0

Fast copy software for internal network traffic


Question

Urgently required

Hello

Urgently needed for business ... I'm looking for the fastest software for copying or transferring data over the internal network, between computers on the same network.

I've googled and found TeraCopy and RichCopy to be the fastest tools around.

When I tested both of them I found the maximum is about 9-10 MB/second ... which is really slow for our needs.

Please advise on a tool, free or paid, that could be as fast as possible.

Note: the LAN has about 20 nodes ... and daily data traffic between them is about 150-200 GB/day ... Yes, we really do work with that volume on a daily basis ... so please advise on a suitable tool.

Also, all computers are running Windows 7 and XP.

Recommended Posts

  • 0

- Later Edit -

Should I use another way to copy instead of TeraCopy? I mean something like FTP?

Thanks, and sorry for being such a nuisance :(

  • 0

Take a look at what they said to you before: after all of this, your hard drives become the limiting factor. Your hard drives are your bottleneck, as they can't transfer that fast.

  • 0

JEBUS. OK. There are programs such as CrystalDiskMark, ATTO, etc. that will benchmark your drives. If you're working with video, you should probably have a RAID array: RAID 0, RAID 5, etc. What speed are your drives, 5400 or 7200 RPM? You've got to worry about both the source and destination drives. If your source drive is an SSD and your target is an IDE 5400 RPM drive that writes at 40 MB/sec, then that's the fastest you're going to get.
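
If installing a benchmarking tool is a hassle, Windows 7 (not XP) also has a rough built-in disk test you could run first from an elevated command prompt; this is only a minimal sketch, assuming D: is the drive you record to (swap in whichever letter applies):

winsat disk -seq -read -drive d
winsat disk -seq -write -drive d

CrystalDiskMark or ATTO will give more detailed numbers, but this at least shows the ballpark sequential read/write speed of a single disk.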

One of my ex-girlfriends used to work for a company that did video, something similar to what your company sounds like. They had massive arrays of drives to handle it. You're not transferring documents, you're working with many gigabytes or terabytes worth of data. You need fast drives: minimum 7200 RPM in RAID arrays, or SSDs. Most SSDs are capable of 200+ MB/s read/write speeds, so then the limiting factor goes back to the transfer medium, in this case gigabit Ethernet. I've got an 8-drive RAID array that can do 400+ MB/sec transfers, so when I move data from my desktop's SSD to the file server, I cap out around 100-110 MB/sec, which is about 800 Mbps ... about as fast as I'm going to get.

Another thing that nobody has mentioned: what type of processors are you using? If you're using something weak, it may not be able to handle the load either. My first file server was based on a dual-core Atom (I cared more about low power usage than performance) and I could only get around 40 MB/sec transfer speeds across my network to the box.

Each of your boxes should have a RAID controller; hell, the stock Intel ICH RAID software might do, if your boards have it. Grab 2 or more drives, make a RAID 0, 1, 0+1, 1+0, or 5 array in EACH box, and cable them with Cat 5 cable to gigabit switches. If you're copying from/to more than one machine at once, you may need multiple switches to help handle all the data moving around.
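
If a box has no usable RAID controller at all, Windows can also stripe two disks in software on editions that support dynamic disks; this is only a rough diskpart sketch, not the Intel ICH route described above, and it assumes the two spare drives show up as disk 1 and disk 2 and that you want the new volume mounted as R: (striping wipes both drives, so back up first):

diskpart
DISKPART> select disk 1
DISKPART> convert dynamic
DISKPART> select disk 2
DISKPART> convert dynamic
DISKPART> create volume stripe disk=1,2
DISKPART> assign letter=R
DISKPART> format fs=ntfs quick

Software striping costs some CPU and gives no redundancy, so treat it as a stopgap until a proper controller is in the budget.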

Another option: once you get your disks out of being the bottleneck, you can look at dual- or quad-port gigabit cards. If you get a switch that supports it, you can link the ports together to form 2-4 Gbit connections (bonded ports on the cards/switches). This will let you raise your network speed for less than what 10GbE will cost.

PS: you might want to get "MB/s" and "Mb/s" sorted out first; there's a huge difference, one is 8 times the other.
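
To put numbers on it: the 9-10 MB/s reported at the start of this thread works out to 9 x 8 = 72 up to 80 Mb/s, roughly the practical ceiling of plain 100 Mb/s Ethernet, while the 100-110 MB/s figure above needs around 800-880 Mb/s, most of a gigabit link.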

  • 0

Thanks a lot for the information :)

This is the hard drive of the machine I am recording to:

UYv6x.png

This is the hard drive of the machine I am doing the encoding work on:

tjS3c.png

So what are the read and write speeds for both, and are they OK or really slow?

Also, regarding the SSD drives ... I found some models at various prices ... which one is suitable? Please advise.

There are too many, like here:

http://www.newegg.com/Store/SubCategory.aspx?SubCategory=636&name=Internal-SSD&Order=PRICE&Pagesize=100

What are the criteria to choose by?

Your time and your help are really much appreciated.

I totally understand that I am a newbie asking stupid questions, and I really do appreciate everyone's patience ... but honestly I have learned a lot from all of you guys and it seems it is very close to being done.

One last thing, regarding FTP ... you mentioned that it does not matter whether I use the normal copy method, copy software, or FTP ... but for my own information, and I do understand the network and hard drive limitations mentioned earlier, isn't FTP essentially designed for file transfer? I mean, wouldn't it help at all to set up an FTP server on the recording box and an FTP client on the encoding box?

Thanks indeed; your patience and your time are much appreciated :)

  • 0

Those screenshots don't show read/write speeds.

What test exactly should I run to show you?

Wonder what's up with your G: drive though.

It is my external Samsung 2 TB hard drive.

  • 0

If you want massive amounts of network speed, there is only one way to get it, and that's to shell out for fiber or 10GbE; either way it's going to cost you more than what you've just spent on 1GbE gear. Personally, I'd say 1GbE or less is for home networks or places that only deal with documents and web pages.

Also, with PCI NICs you're going to be limited by that bus's bandwidth (a shared 32-bit/33 MHz bus), so to go faster you need PCIe or PCI-X bus devices to maximize throughput.
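
For reference, the arithmetic behind that: a conventional PCI slot is 32 bits wide at 33 MHz, so its theoretical ceiling is 32 x 33.33 MHz / 8, or about 133 MB/s, and that bandwidth is shared by every card on the bus, so a PCI gigabit NIC typically can't sustain the roughly 125 MB/s that gigabit Ethernet itself could carry.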

  • 0

to go faster you need PCIe or PCI-X bus devices to maximize throughput

And will it be much faster?

Also, roughly how much would it cost?

  • 0

After a lot of trial and error I've found that using FTP really improves the transfer speed compared to any other way.

I've tried two FTP clients, FileZilla and SmartFTP,

and tried FileZilla Server,

and the result, as you can see, is really good, about 40-45 MB/second:

HcPj9.png

KoCDU.png

I think we will stick with FTP, as getting any extra SSD drives would cost a fortune for our budget right now.

Thanks a lot to all of you for all the help, information, and ideas.

Really, this place is brilliantly helpful indeed.

Thanks, and I'm sure you've had enough of me so far lol hehehehehehe

If there are any other suggestions, they would be much appreciated as well :)

  • 0

Now it's clear you're not listening. You've been told that for faster disk speed you need RAID or SSDs. For faster network transfers you need gigabit+ Ethernet. And I already told you to get CrystalDiskMark (not CrystalDiskInfo) or ATTO to benchmark your drives, and you didn't listen.

Also, what is xx:xx MB/sec? I know of no country that uses ":" for commas or periods.

Run this: http://crystalmark.info/software/CrystalDiskMark/index-e.html

  • 0

Now it's clear you're not listening.

Absolutely not; I am listening to all the information I've been given and keep trying several solutions.

You've been told that for faster disk speed, you need RAID or SSDs.

I've found them to be really too expensive; getting 20 SSDs now is not possible. Also, I did send a link earlier to what I found and asked for your advice on which one I should get, as it is my first time ever hearing about such drives.

For faster network transfers you need gigabit+ Ethernet.

Yes, I did follow your advice and other members' as well: I got the gigabit Ethernet cards and 16-port switches instead of the 8-port ones, and I tried the jumbo frame option mentioned earlier as well.
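
(As a quick sanity check that jumbo frames are really working end to end, assuming a 9000-byte MTU and using a made-up destination address, a don't-fragment ping with a large payload should go through:)

ping -f -l 8972 192.168.1.50

8972 is 9000 minus the 28 bytes of IP and ICMP headers; if the reply says "Packet needs to be fragmented but DF set", some link in the path is not passing jumbo frames.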

Also, I did try the FTP way: installing a server on the recording machine and using an FTP client on the encoding machine to copy the files over.

And I already told you to get CrystalDiskMark (not CrystalDiskInfo) or ATTO to benchmark your drives, and you didn't listen.

It seems that I got something wrong: I did install the application, ran it, and posted a snapshot of what it says, and asked if there is anything else I could do (that was in the post where I mentioned my external Samsung 2 TB hard drive) ... but it seems that due to my limited knowledge I did not understand what you asked me to do.

Also, what is xx:xx MB/sec? I know of no country that uses ":" for commas or periods.

I just copied the start time and end time of the copy process from the TeraCopy application, then posted the net time (I did it using Excel to be accurate).

I did, my friend ... but again it seems that I am doing it the wrong way. Please tell me what I should do: isn't it just to run it and then take a snapshot, or what exactly?

Thanks for your time and for your help.

Finally ... I did not ignore any of your advice or that of any other members around the forum, as I understand and appreciate your time and your help, but please forgive me and accept my apologies if I misunderstood anything, as most of this stuff is new to me.

Also, regarding the SSD drives ... where can I get some cheap ones, as I would need about 20 or 25 drives?

Thanks a lot and much appreciated :)

  • 0

You don't need SSDs; that was just a recommendation. Grab 4 drives and put them in a RAID 0/5 array; that's enough for decent speeds. Say each drive's R/W speed is 50 MB/s: with 4 drives in RAID 0 you could theoretically reach 150-200 MB/s, and in RAID 5 it's N-1 drives, so 100-150 MB/s. Four 1 TB/2 TB drives should be around $400-500, the cost of a 300 GB SSD. Put a RAID array on each end of your transfer and you'll easily get around 100 MB/s across your network. If you want faster, get equipment that supports channel bonding and you could get 2x 1 Gbit/s, or around 200 MB/s across your network.

I don't know how else to explain it to you. If you want really fast transfer speeds it's going to cost you a lot of money; this isn't something a $20 port expander or a Cat 6 cable is going to give you.

Try this, since you can't figure the other one out: http://www.hdtune.com/

  • 0

Grab 4 drives and put them in a RAID 0/5 array; that's enough for decent speeds. Say each drive's R/W speed is 50 MB/s: with 4 drives in RAID 0 you could theoretically reach 150-200 MB/s, and in RAID 5 it's N-1 drives, so 100-150 MB/s. Four 1 TB/2 TB drives should be around $400-500, the cost of a 300 GB SSD. Put a RAID array on each end of your transfer and you'll easily get around 100 MB/s across your network. If you want faster, get equipment that supports channel bonding and you could get 2x 1 Gbit/s, or around 200 MB/s across your network.

I don't think that is practical. He has a number of computers (say 10) that are recording things. If you were to RAID all of those systems, you'd need 20+ identical drives for RAID 0, and 30-40+ for RAID 5, for a marginal speed increase of maybe 2x-3x the current speed, and that's IF the PCI bus isn't the slowest link in the chain.

And whoever suggested 10 Gbit Ethernet really isn't paying attention to the bottleneck here :p

At this point I'd think the best use of bandwidth, if he's averaging 36 MB/s, is to bump up to transferring two files from two separate sources. Two sources at 36-40 MB/s each should put the switch at 72-80 MB/s, which is around the real-world limit of gigabit. A third could be added, but it would slow the other two files down a bit, to around 35 MB/s (figuring 850 Mbit/s as the real-world limit of gigabit, divided by three sources, divided by 8 bits per byte = about 35 MB/s per file).

There are a lot of cooks in this topic.

  • 0

I don't think that is practical. He has a number of computers (say 10) that are recording things. If you were to RAID all of those systems, you'd need 20+ identical drives for RAID 0, and 30-40+ for RAID 5, for a marginal speed increase of maybe 2x-3x the current speed, and that's IF the PCI bus isn't the slowest link in the chain.

Why would he need 20-40 identical drives? You don't need the same drives across all your systems for RAID, only within the same system, and it's hardly uncommon knowledge that combining 2+ drives increases speed. I get 90-100 MB/s on my network when copying from an SSD to an 8-drive RAID array; while that's overkill for this guy, I'm sure if he grabbed 2x 1 TB drives and a cheap RAID 0 controller, he'd eliminate the drives as the bottleneck.

And whoever suggested 10 Gbit Ethernet really isn't paying attention to the bottleneck here :p

I suggested it at first, because in the beginning the talk was about network speed. I didn't say he should go get 10GbE; I only recommended it for further speed increases, all other things not being the bottleneck, of course. Gig-E gets relatively slow when you're transferring TBs worth of data. I once backed up my file server to another box over gigabit Ethernet, all 10 TB worth of data; even with RAID arrays in both boxes and gigabit Ethernet, it still took hours. Once this guy figures out his storage issues, if he still needs more speed, then 10GbE or port bonding is the next step.

  • 0

@SirEvan

Thanks a lot for clearing up the information ... These options would be overkill and go well beyond our budget ... they will surely be necessary in the future, but for the time being we cannot pay all that money for hardware ... still, you cleared up many things I was not aware of regarding hard drive speeds.

@cybertimber2008

The FTP transfer speed I posted earlier here, and here, was not in parallel but one file at a time ... and it happened only with the D-Link PCI card using the jumbo frame option ... when I tested the transfer using the onboard built-in network it did not go over 23 MB/second.

Plus, the FTP transfer method was the fastest way (I do not know why, and I could be wrong, but it is just what I have found so far after many tries).

The conclusion we agreed on (me and my co-workers) is to use the current D-Link PCI network cards with the D-Link 16-port switches (we will buy another one, as the total will be 16 recording computers, 5 or 6 encoding computers, and 2 ADSL ports). We will also have 2 networks.

We will also buy the KVM I mentioned here, to use instead of the Windows Remote Desktop connection, so we have real control of each machine instead of the current remote way (even though the current way costs us nothing).

Then later on, when we have more money, we will definitely move to SSDs with RAID, and will surely move to a more professional backup solution ... but for the time being, and to be honest, because we are just getting started, we do not have enough budget :(

Thanks a lot everyone ... really much appreciated!!! :D

  • 0

...

Sorry for spamming or bothering you with my silly details, but it is just to put you in my place, in the exact situation.

...

I think details are a good thing ;).

  • 0

Just want to update you that the D-Link gigabit switch DGS-1060 is really amazing, and better than the HP ProCurve and also better than the Cisco Small Business edition.

  • 0

"I am afraid that after using the 1 gb it is still not fast and only 38 mb/s maximum:"

Are you on crack or something?? I would have to say that 38 MB/s is WAY FASTER than the 9 MB/s you were getting before ;)

As mentioned, it is quite possible your HDDs are your bottleneck, sure.

But let's make sure you're actually getting wire speeds that could exceed what you're seeing; it could still be the wire slowing you down.

Grab iperf or netio, some tool to check your wire speed. Example:

C:\Windows\System32>iperf -c 192.168.1.4 -w 256k

------------------------------------------------------------

Client connecting to 192.168.1.4, TCP port 5001

TCP window size: 256 KByte

------------------------------------------------------------

[156] local 192.168.1.100 port 5833 connected with 192.168.1.4 port 5001

[ ID] Interval Transfer Bandwidth

[156] 0.0-10.0 sec 967 MBytes 811 Mbits/sec

So you see 811 Mbps on the wire; it would not be possible to see file transfers that exceed this. Dividing by 8 as a rough conversion gives about 101 MB/s, and clearly my disks cannot do that, so my disks are more likely my bottleneck than my wire speed.

But the 40-45 you're seeing with FTP could be your disks? Or could be the limits of the FTP software you're using? FTP should be faster, but that's not always the case.

So with an FTP pull from my server I saw this:

Command: RETR win7-x64-any.iso

Response: 150 Connection accepted

Response: 226 Transfer OK

Status: File transfer successful, transferred 3,319,478,272 bytes in 76 seconds

So that works out to 43.6 MB/s; not bad, but not really what we are hoping for.

Now with a simple robocopy of the same file, from the same server, I saw this:

Files : win7-x64-any.iso

Speed : 64075170 Bytes/sec.

Speed : 3666.410 MegaBytes/min.

Ended : Sun Apr 08 10:33:22 2012

Which is quite a bit faster! As for that TeraCopy crap ... yeah, so here is the same file using that BS:

post-14624-0-14213400-1333899756.jpg
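
(For reference, repeating that robocopy comparison only takes a one-line command; the share and destination paths below are placeholders, not the ones actually used above:)

robocopy \\server\share D:\incoming win7-x64-any.iso

On Windows 7 you can also add /MT:8 for multithreaded copying, which helps more with folders full of files than with one big ISO.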

Let's see your wire speed with iperf or netio; you can get iperf here:

http://sourceforge.net/projects/iperf/files/jperf/jperf%202.0.0/jperf-2.0.0.zip/download

Just unzip it; you only need iperf.exe unless you want to use the Java frontend. Just run the exe on box 1 with -s, and then on box 2 run iperf -c ipaddressofbox1.

You might want to add -w 256k for a bigger window size.
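
(In other words, something like this, with 192.168.1.10 standing in for whatever box 1's real address is:)

box 1:  iperf -s -w 256k
box 2:  iperf -c 192.168.1.10 -w 256k

Box 2 then reports the raw TCP throughput between the two machines with the disks completely out of the picture.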

You can get netio here:

http://freecode.com/projects/netio

  • 0

I'd say the hard drives are the bottleneck for sure. When I was testing the speed of my network, normal file streaming maxed out at around 45 MB/s (360 Mbps); when I had the server read the file into memory first, I got 106 MB/s (848 Mbps). It might have been possible to get it faster, but the receiving computer couldn't actually keep up and locked up for a few seconds.

  • 0

Just to update you with my feedback: I've found that using SSD hard drives is the only real option to make it fast.

Really, the more you know, the more you spend ... I hate it :/

  • 0

And what was the result of your iperf test, to see what speed you were getting on your gig network?

I just reread this thread, and you never did the actual speed test of your disks. So how do we know if the bottleneck was your disks or your network speed? You just reported the info on the disks; you never ran the actual speed tests that were suggested multiple times!!!

If you're saying you're seeing better speed with an SSD, what speed???

This topic is now closed to further replies.