
How do I make all Windows drives use a 64K block size instead of 4K?

Recommended Posts

ReMad    0

Hi

How do I make all Windows drives use a 64K block size instead of 4K?

Is there a way to convert an already installed Windows drive from a 4K block size to 64K?

And how do I fix the alignment for all drives to 64K without formatting them?

Riva    1,084

Hi

How do I make all Windows drives use a 64K block size instead of 4K?

Is there a way to convert an already installed Windows drive from a 4K block size to 64K?

And how do I fix the alignment for all drives to 64K without formatting them?

I don't think there is a way without formatting. Why are you interested in 64k?

Riva    1,084

Acronis Disk Director can do that

D. S.    426

Thread moved.

+BudMan    3,368

Why would you want to do this on your OS drive? Are you talking about a drive that you only store LARGE files on?

I would suggest you take a read of this experiment with different sizes.

http://ejrh.wordpress.com/2012/10/26/cluster-size-experiment/


PGHammer    1,266

Why would you want to do this on your OS drive? Are you talking a drive that you only store LARGE files on?

I would suggest you take a read at this experiment with different sizes.

http://ejrh.wordpress.com/2012/10/26/cluster-size-experiment/

I tend to format with the smallest-possible block size (512-byte blocks) - this alone is smaller than default by quite a bit.

 

One of my biggest quibbles, in fact, is that you can't choose the block size at installation time (when doing a clean install) of any NT-based OS.

 

Why such a small cluster size?  Simple - far less slack space; when cluster size increases, so does slack space - especially when you have mixed-size files or many files smaller than the cluster size.
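PGHammer's slack-space point can be illustrated with a quick back-of-the-envelope calculation (a sketch only; the file sizes below are made-up examples, not measurements):

```python
# Back-of-the-envelope slack calculator. Every file occupies a whole number
# of clusters, so the unused tail of its last cluster is wasted ("slack").

def slack_bytes(file_size: int, cluster_size: int) -> int:
    """Bytes wasted storing one file at the given cluster size."""
    if file_size == 0:
        return 0
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

def total_slack(file_sizes, cluster_size):
    """Total slack, in bytes, across a collection of file sizes."""
    return sum(slack_bytes(size, cluster_size) for size in file_sizes)

# Made-up mix of small files, the kind an OS drive is full of:
files = [200, 1_500, 3_000, 10_000, 70_000]

print(total_slack(files, 4 * 1024))    # slack at 4K clusters
print(total_slack(files, 64 * 1024))   # slack at 64K clusters: far larger
```

With those example sizes the 64K figure comes out more than twenty times larger than the 4K one, which is PGHammer's point in miniature.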

+Zag L.    661

One of my biggest quibbles, in fact, is that you can't choose the block size at installation time (when doing a clean install) of any NT-based OS.

 

 

 

PGHammer,

 

I could be wrong, but when you begin an install, if you drop out to a command prompt and run diskpart, I think you can format with a user-defined block size, such as:

 

DISKPART> FORMAT FS=NTFS UNIT=512 QUICK

 

Does that not work?
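For completeness, a fuller sequence along the same lines might look like this (a sketch only; the disk number is a placeholder, and CLEAN wipes the selected disk, so check LIST DISK first):

```
DISKPART> LIST DISK
DISKPART> SELECT DISK 0
DISKPART> CLEAN
DISKPART> CREATE PARTITION PRIMARY
DISKPART> FORMAT FS=NTFS UNIT=64K QUICK
DISKPART> ASSIGN
```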


ReMad    0

Acronis Disk Director can do that

 

Where is that option? I was trying to find it but don't see it.

 

Does it change from 4K to 64K without formatting?

ReMad    0

Why would you want to do this on your OS drive? Are you talking about a drive that you only store LARGE files on?

I would suggest you take a read of this experiment with different sizes.

http://ejrh.wordpress.com/2012/10/26/cluster-size-experiment/

Just to run a customer test backup of VMs with NetBackup, and see whether the problem is related to the block size being 64K or to the VMware-configured block size.

ReMad    0

PGHammer,

 

I could be wrong, but when you begin an install, if you drop out to a command prompt and run diskpart, I think you can format with a user-defined block size, such as:

 

DISKPART> FORMAT FS=NTFS UNIT=512 QUICK

 

Does that not work?

I want to know if there is a way to do this for all drives at the start of installation.

 

That would save a lot of time.

+BudMan    3,368

^ You would need to select the disk and partition you want to format. If the disk was clean there wouldn't even be any partitions, so you would have to create those with diskpart too. Or just do a custom/advanced setup and create the partitions that way, then open the command line and use diskpart to format the partition you want with the allocation unit you want.

Example: just fired up a VM to show the format command and that it's now 64K.

[screenshot: format command output showing a 64K allocation unit]

Here's the thing: if you had to come here to ask this question, you should probably just let it use the default 4K. And 64K on the OS drive is going to waste space for no actual purpose.

And what is your issue with NetBackup and VMs?
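As an aside, the allocation unit size of an existing volume can be checked with fsutil from an elevated prompt (standard Windows tool; output abridged, values shown are the 4K defaults as an example):

```
C:\> fsutil fsinfo ntfsinfo C:
...
Bytes Per Sector  :                512
Bytes Per Cluster :                4096
...
```

"Bytes Per Cluster" is the allocation unit size the volume was formatted with.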

ReMad    0

^ You would need to select the disk and partition you want to format. If the disk was clean there wouldn't even be any partitions, so you would have to create those with diskpart too. Or just do a custom/advanced setup and create the partitions that way, then open the command line and use diskpart to format the partition you want with the allocation unit you want.

Example: just fired up a VM to show the format command and that it's now 64K.

[screenshot: 64k.png]

Here's the thing: if you had to come here to ask this question, you should probably just let it use the default 4K. And 64K on the OS drive is going to waste space for no actual purpose.

And what is your issue with NetBackup and VMs?

The problem is a slow backup.

 

Here is NetBackup's reply:

 

"If that file system is formatted with a 4K block size then NetBackup will also consider it 4K, because NetBackup has no control over the file system; please confirm."

 

So I want to make all drives 64KB instead of 4KB.

 

Can I do that on the Windows 2003 C: drive (boot), and also on Windows 2008 when creating it?

 

Please tell me the steps to follow after booting from the CD.

+BudMan    3,368

"Please tell me the steps to follow after booting from the CD"

Really?? You don't know how to get to the command line after booting a Windows install disk?

[screenshot: Windows setup screens]

As for your NetBackup being slow -- how do you think changing the filesystem cluster size is going to affect that? Sounds more like you're down the wrong path in troubleshooting a problem and grasping at straws.

If the software is slow, use better software ;)

What do you consider slow, btw? And how is that related to the cluster size on the disk? Where is the backup destination -- over a network? What version of NetBackup are you using? What is the actual Windows OS you're backing up? Are you doing deduplication? Etc.


manosdoc    10

Just SHIFT + F10 at the 2nd screen Budman posted.


ReMad    0

Just SHIFT + F10 at the 2nd screen Budman posted.

I did,

 

but choosing dispart doesn't give that option on Windows 2003.

 

I do need to do this on Windows 2003.

 

I guess it will work on 2008 and Windows 7.

 

So what do you suggest?

+BudMan    3,368

dispart?

It's diskpart.

And the 2003 recovery console clearly has the diskpart command:

http://support.microsoft.com/kb/326215

How To Use the Recovery Console on a Windows Server 2003-Based Computer That Does Not Start

---

Recovery Console Commands

The following list describes the available commands for the Recovery Console:

Attrib changes attributes on one file or folder.

Batch executes commands that you specify in the text file, InputFile. OutputFile holds the output of the commands. If you omit the OutputFile argument, output is displayed on the screen.

Bootcfg is used for boot configuration and recovery. You can use the bootcfg command to make changes to the Boot.ini file.

<snipped>

Diskpart manages partitions on hard disk volumes.

The /add option creates a new partition.

The /delete option deletes an existing partition.

The device-name argument is the device name for a new partition. One example of a device name for a new partition is \device\harddisk0.

The drive-name argument is the drive letter for a partition that you are deleting, such as D:.

Partition-name is the partition-based name for a partition that you are deleting, and can be used instead of the drive-name argument. One example of a partition-based name is \device\harddisk0\partition1.

The size argument is the size in megabytes of a new partition.

---

Let me see if I can find that old media around and post up a screeny for you.

edit: OK, found an old 2003 R2 -- here's the issue: diskpart is there and you can create the partition(s), and format is there. But the included format does not allow any sort of allocation size; normally it would be /A:size, but that doesn't seem to work with the recovery console and 2k3.

Just boot a newer OS, format using that command, then boot your 2k3 disk. You could even use

format [{[FS=<FS>] [REVISION=<X.XX>] | RECOMMENDED}] [LABEL=<"label">] [UNIT=<N>] [QUICK] [COMPRESS] [OVERRIDE] [NOWAIT] [NOERR]

REVISION=<X.XX>

Specifies the file system revision (if applicable).

Which I believe for 2k3 was 3.1?? You might want to look that up, but you could probably default-format and 2k3 should install on it. Could test that now that I found a 2k3 disk.

edit: So when I make the disk with 64k clusters, Windows 2003 does not install. I don't think it will work. Going to try 2008 and see if that installs to my 64k disk.

ReMad    0

dispart?

It's diskpart.

And the 2003 recovery console clearly has the diskpart command

<snipped>

But Windows 2003 doesn't give me that option when I boot from the CD.

 

So what do I do?

+BudMan    3,368

Yes, it clearly does give you the option.

[screenshot]

But as I said, it doesn't have the option to set the allocation unit size anyway. But diskpart is there. Just format the disk with Windows 7 media, and then install 2k3.

edit: OK, after some research: even with 2k8 SP1, formatting the system disk to 64k with the recovery console still will not install.

WHERE did you read that your system disk can use 64k clusters?? I don't believe it is possible. Data drives, sure!! It makes sense if all you have is HUGE files, but the system disk is filled with tiny little files, thousands and thousands of them -- it would make no sense to use such a large size.

ReMad    0

Yes, it clearly does give you the option.

[screenshot: 1.png]

But as I said, it doesn't have the option to set the allocation unit size anyway. But diskpart is there. Just format the disk with Windows 7 media, and then install 2k3.

edit: OK, after some research: even with 2k8 SP1, formatting the system disk to 64k with the recovery console still will not install.

WHERE did you read that your system disk can use 64k clusters?? I don't believe it is possible. Data drives, sure!! It makes sense if all you have is HUGE files, but the system disk is filled with tiny little files, thousands and thousands of them -- it would make no sense to use such a large size.

 

Thanks a lot, BudMan.

 

The customer tested and got no benefit from it.

 

Appreciate your help.

+BudMan    3,368

Yeah, from doing a bit of research it seems MS purposely left out the ability to change the unit size on install, because the system doesn't run with changed sizes. In 2k3, when it goes to reboot, the loader is not read. It didn't work even with 2k8 R2 -- now maybe 2012 works with larger than default?

But I doubt it, to be honest. So while NetBackup might call out wanting or suggesting 64k -- if that is for system disks, I find it highly, highly unlikely, since nobody could be running them. And even if possible, it's outside the scope of what 90% of their customer base would be doing.

The IT guys unbox the new server, put in the CD and install; they're not going to look for some way of changing the cluster size of the OS disk.

Now back in the DAY, when installing from smart disks from Compaq, I think the routine used to format FAT and then convert, and the system drive used to end up with 512 vs 4k. That kind of rubbed me the wrong way, and I would change it to 4k with a 3rd-party tool. But was that FAT-to-NTFS conversion on NT or 2k server -- the years are starting to fly by ;)

Gotenks98    502

I tend to format with the smallest-possible block size (512-byte blocks) - this alone is smaller than default by quite a bit.

 

One of my biggest quibbles, in fact, is that you can't choose the block size at installation time (when doing a clean install) of any NT-based OS.

 

Why such a small cluster size?  Simple - far less slack space; when cluster size increases, so does slack space - especially when you have mixed-size files or many files smaller than the cluster size.

I am pretty sure, unless something has changed, that a file of less than 1KB is still going to use 4KB of space. There is no point in using a smaller size; it also makes things run slower. Windows is optimized to read in 4K clusters.

RobD63    0
Posted (edited)
On 12/6/2013 at 10:47 PM, BudMan said:

Why would you want to do this on your OS drive? Are you talking about a drive that you only store LARGE files on?

I would suggest you take a read of this experiment with different sizes.

http://ejrh.wordpress.com/2012/10/26/cluster-size-experiment/

Useful thread, especially the screenshots and the Shift+F10 tip for those who do NOT regularly install Windows from scratch; but times change.

 

What people are missing when supporting the default 4K cluster size is the appalling disk-allocation strategy that NTFS uses, which leads to massive fragmentation of even small files, spreading a file across very many small holes as files are deleted and re-allocated. I have seen many Windows PC installations in the "I need a new PC" state caused by the MFT getting massively fragmented, so that logging in while Windows Update is working becomes too painful for normal people. Yes, there are tools to fix MFT fragmentation, but it tends to recur; running a normal disk defragger simply doesn't help and may actually make things worse!

 

So a reason to use 64KiB cluster sizes today on modern disks is to reduce file fragmentation and transfer data in larger blocks, especially on HDDs with moving disk arms. When I look at how many files/objects are actually stored on a system C: drive, including user profiles, it tends to be about 500,000. On a modern disk I really care more about fast access and low fragmentation than about 500,000 x 64 KiB of wasted space (being pessimistic about internal file fragmentation).

 

The fact is, programmers have learned that opening/closing files tends to be a relatively slow process, so data tends to be packed into larger container files rather than relying on the OS file-system indexing -- e.g. the Windows Registry, any database, or games. These files are not read sequentially but mapped into memory with a backing store. In theory a 4KiB cluster size would be optimal, as it avoids the write amplification of a small change to one memory page writing out the whole 64KiB cluster rather than one page; but writes to most data are comparatively rare -- it's metadata like access times/counts, logging, save files and cache files that are written most. The files that are being written get allocated in hundreds of tiny pieces, filling the first holes NTFS finds, and then need to be tidied up later by a defragger, which has become a significant consumer of disk performance.

 

So actually, rationally, and even looking at the cited data, I see that experiment as backing 64KiB as the modern go-to cluster size for all but the smallest partitions. Compressed folders become less important as disk capacities increase, and applications use compressed data more often because it is actually faster to decompress/recompress in a modern CPU, maximising cache usage, than to transfer the data to/from main memory.

 

Now consider SSDs, which don't suffer so badly in fragmented states: do 4KiB writes remain 4KiB, or are there actually much larger pages behind them that are erased and written?

Once you answer that, Windows initiating larger minimum transfers might not seem so awful, particularly when you consider the lower number of extents the file system has to manage thanks to larger clusters. On that typical C: drive I mentioned earlier, even 1KiB of wastage on every file becomes only 0.5GiB, which is trivial -- who buys disks smaller than 120GB these days? Even in 2013, when this thread started, the SSDs I used were 120GB, and an HDD under 500GB was becoming a rarity.

 

Finally, the guy saving space by minimising the cluster size is going to suffer large penalties on a system drive, because the transfers are not in simple units of the RAM page size (4KiB). (Huge pages are also used for efficiency, reducing entries in VM tables, despite wasting relatively expensive and scarce RAM compared to cheap-as-chips HDD, or even SSD, on a $-per-GB basis.) It was found in the 70s and 80s that, despite the HDD sector size being 512 bytes, 4KiB or 8KiB block sizes were far faster; and in the 90s further performance gains over SCSI came from coalescing transfers into 64KiB DMA operations.

 

In conclusion: in the 1990s, worrying about wastage in 4KiB HDD blocks was a false economy. Now, in the 21st century, the 4KiB block causes other inefficiencies due to the overhead of checksum information; HDD manufacturers moved away from 512-byte sectors as capacity increased; and large disk partitions actually need a larger cluster size to make the full capacity available.
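RobD63's 500,000-file estimate can be sanity-checked with a rough expected-waste model (a sketch; the half-cluster-per-file average is a rule of thumb assuming final-cluster fill is uniformly distributed, not measured data):

```python
# Rough expected slack: on average each file wastes about half a cluster,
# assuming its final-cluster fill is uniformly distributed.

def expected_waste_gib(n_files: int, cluster_size: int) -> float:
    """Approximate total slack in GiB for n_files at a given cluster size."""
    return n_files * (cluster_size / 2) / 2**30

n = 500_000  # the file count cited in the post above

print(round(expected_waste_gib(n, 4 * 1024), 2))   # roughly 1 GiB at 4K clusters
print(round(expected_waste_gib(n, 64 * 1024), 2))  # roughly 15 GiB at 64K clusters
```

So even this pessimistic average puts the 64K cost at around 15 GiB for half a million files: noticeable, but small next to modern disk capacities.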

dipsylalapo    1,670

Thread locked. 

 

Please do not resurrect old threads. 
