Discussion:
JFFS2 (NAND) mount time improvement
Ferenc Havasi
2004-08-17 13:59:42 UTC
Dear All,

Is there anyone interested in $SUBJECT who has a NAND device
and some time to help us test? Please contact me!

We now have some patches (the idea is the one discussed with David
at http://lists.infradead.org/pipermail/linux-mtd/2004-June/009873.html ).

They have been tested with mtdram and work, but unfortunately we don't have
any NAND device yet to test and measure the effect on mount time
(it should certainly be faster).

Regards,
Ferenc
David Woodhouse
2004-08-17 14:09:25 UTC
Post by Ferenc Havasi
We now have some patches (the idea is the one discussed with David
at http://lists.infradead.org/pipermail/linux-mtd/2004-June/009873.html ).
Please just send them to the list. I'm sure some eager testers will
crawl out of the woodwork :)

How did you handle nlink? I'm sure I had a cunning plan for it at one
point, but I couldn't remember it at the time I composed the message you're
looking at, so I was hand-waving.
--
dwmw2
Ferenc Havasi
2004-08-18 21:37:33 UTC
Post by David Woodhouse
Please just send them to the list. I'm sure some eager testers will
crawl out of the woodwork :)
OK, I was planning to do it - I just needed some time to make them more
presentable :)
Post by David Woodhouse
How did you handle nlink? I'm sure I had a cunning plan for it at one
point, but I couldn't remember it at the time I composed the message you're
looking at, so I was hand-waving.
We tried to modify only jffs2_scan_eraseblock so that it reads fewer NAND
pages than before.

A JFFS2_NODETYPE_INODE_CACHE node will be placed at the end of every
erase block if you run mkfs.jffs2 with its -C option. (There is another
new option, -N, to specify the size of the NAND page.)

This new node contains a record for every node stored in the erase
block. All the necessary node information is stored, except for
JFFS2_NODETYPE_DIRENT nodes: for those we currently store only their
offsets and still have to read them - that can certainly cause some
slow-down :(
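
Just to illustrate the idea, roughly something like the C sketch below
(this is only an illustration - the field names and the exact on-flash
layout in the patch are different):

/* Illustrative sketch only - not the real on-flash format of the patch. */
#include <stdint.h>

/* One record per node found in this erase block. */
struct inocache_record {
        uint16_t nodetype;   /* e.g. JFFS2_NODETYPE_INODE or _DIRENT   */
        uint32_t inode;      /* inode number the node belongs to        */
        uint32_t version;    /* node version                            */
        uint32_t offset;     /* offset of the node inside the block     */
        uint32_t totlen;     /* total length of the node on flash       */
};

/* Written by mkfs.jffs2 -C as the last node of each erase block. */
struct inocache_node {
        uint16_t magic;        /* JFFS2_MAGIC_BITMASK                   */
        uint16_t nodetype;     /* JFFS2_NODETYPE_INODE_CACHE            */
        uint32_t totlen;       /* length of this node                   */
        uint32_t hdr_crc;      /* CRC of the header above               */
        uint32_t num_records;  /* how many records follow               */
        struct inocache_record records[];
};

With such a node at the end of the block, the scan only has to read the
last NAND page(s) of each erase block and can build its node information
from these records instead of reading every page - except for the dirent
nodes, as described above.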

As I wrote, we haven't tested it on NAND yet - we can only hope for the best.
So far it has only been tested with kernel 2.6.8.1 and the 2004-08-17 MTD
snapshot, using mtdram.
Post by Mike
I have a 256M Samsung NAND and really need this reduced mount time. I
would be most willing to test out your improvements. Please let me know
when it's submitted and I would be happy to test drive.
Thanks Mike - the patch (for mkfs.jffs2, its manual and jffs2) is
attached. The fs part of it will log using KERN_DEBUG.

Regards,
Ferenc
Jarkko Lavinen
2004-09-08 13:14:10 UTC
Post by Ferenc Havasi
We tried to modify only jffs2_scan_eraseblock so that it reads fewer NAND
pages than before.
I tried this patch on a test board with an OMAP 1710 @ 192 MHz and
1 Gbit of NAND flash, using the internal HW NAND flash controller on the
OMAP. I use a 2.6.9-rc1-omap1 kernel, patched with the September 7th CVS
snapshot, and then applied Ferenc's patch.

The raw read speed using dd through /dev/mtd is about 1.8 MiB/s. Reading
the whole 122 MiB partition takes about 68s using dd bs=2k. The fs is 35%
full, and with plain 2.6.9-rc1-omap1 the mount time is 20s.

After applying Ferenc's patch I created the filesystem image
with

mkfs.jffs2 -e 128 -l -n -d foo -o bar

I then flashed the test partition with the image just created, rebooted the
board with a kernel that contained the new scan code, and mounted the test
partition. The mount time increased from 20s to 82s. This is even more
than reading the whole device through dd.

Something is wrong. Did I use the correct options to create the image? There
does not seem to be a flash page size option anymore. I don't use a
cleanmarker in the image, as the flasher will write it to the first
OOB of each erase block.

I think the next logical thing to do is to profile where the time is spent
and what might cause the roughly fourfold increase in mount time.

Jarkko Lavinen

Jarkko Lavinen
2004-09-10 14:19:24 UTC
Post by Jarkko Lavinen
mkfs.jffs2 -e 128 -l -n -d foo -o bar
...
Post by Jarkko Lavinen
Something is wrong. Did I use the correct options to create the image?
Both the kernel configuration and the mkfs.jffs2 options were wrong.

The fourfold increase in mount time was due to excessive debugging messages
slowing down the mount.

The options -C and -N were missing, so the icache node was not constructed
into the image. I reran the mkfs.jffs2 command with -e 128 -l -n -v -N 2048
-C and created a new test fs image. I also raised the fs fill ratio to 96%
to better show the difference, if any.

With the plain 2.6.9-rc1-omap1 kernel the mount time is 52s.

With Ferenc's patch the mount time drops to 14s.

With plain 2.6.9-rc1-omap1, the head -20 of the kernel profile for the 52s
mount looks like:

4638 omap_nand_read_buf 50.4130
67 __delay 5.5833
197 jffs2_get_ino_cache 2.1413
114 omap_write_command 0.9828
22 omap_nand_calculate_ecc 0.7857
12 omap_nand_enable_hwecc 0.3750
11 jffs2_add_ino_cache 0.1058
18 crc32_le 0.0833
17 jffs2_add_fd_to_list 0.0773
4 generate_pseudo_ecc 0.0625
3 kmem_cache_alloc 0.0417
1 jffs2_alloc_inode_cache 0.0278
47 nand_read_ecc 0.0237
3 __memzero 0.0234
2 omap_nand_correct_data 0.0217
2 kfree 0.0152
8 jffs2_scan_inode_node 0.0145
2 __kmalloc 0.0139
1 omap_select_chip 0.0132
2 make_coherent 0.0064


And with Ferenc's patch, the profile head of the 14s mount looks like:

993 omap_nand_read_buf 10.7935
29 __delay 2.4167
185 jffs2_get_ino_cache 2.0109
32 omap_write_command 0.2759
10 kmem_cache_alloc 0.1389
3 omap_nand_enable_hwecc 0.0938
9 jffs2_add_ino_cache 0.0865
19 jffs2_add_fd_to_list 0.0864
18 crc32_le 0.0833
6 omap_select_chip 0.0789
2 jffs2_alloc_raw_node_ref 0.0556
7 __kmalloc 0.0486
3 generate_pseudo_ecc 0.0469
2 __wake_up 0.0385
1 jffs2_alloc_full_dirent 0.0357
1 omap_nand_read_byte 0.0312
2 nand_release_chip 0.0238
3 jffs2_scan_make_ino_cache 0.0234
4 nand_get_chip 0.0217
38 nand_read_ecc 0.0192


Jarkko Lavinen

