[PATCH] ll_rw_blk: allow more flexibility for read_ahead_kb store
It can make sense to set read-ahead larger than a single request, so we should not enforce that policy on the user. The BLKRASET ioctl does not impose such a restriction either; with this change the sysfs read_ahead_kb attribute behaves identically to the ioctl.

Issue also reported by Anton <cbou@mail.ru>

Signed-off-by: Jens Axboe <axboe@suse.de>
commit da20a20f3b
parent bf57225670
@@ -3806,9 +3806,6 @@ queue_ra_store(struct request_queue *q, const char *page, size_t count)
 	ssize_t ret = queue_var_store(&ra_kb, page, count);
 
 	spin_lock_irq(q->queue_lock);
-	if (ra_kb > (q->max_sectors >> 1))
-		ra_kb = (q->max_sectors >> 1);
-
 	q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
 	spin_unlock_irq(q->queue_lock);
 
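For context, a minimal user-space sketch (not part of the patch) of the two interfaces the commit message compares: the sysfs read_ahead_kb attribute and the BLKRASET ioctl. The device paths and sizes below are placeholders, not values taken from the patch.

/*
 * Hypothetical illustration: two user-space ways to set read-ahead
 * on a block device.  Paths and sizes are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>		/* BLKRASET */

int main(void)
{
	const char *sysfs = "/sys/block/sda/queue/read_ahead_kb";
	const char *dev = "/dev/sda";
	int fd;

	/* 1) sysfs route: write the desired read-ahead in KB. */
	fd = open(sysfs, O_WRONLY);
	if (fd >= 0) {
		const char *kb = "1024\n";	/* 1 MB read-ahead */
		if (write(fd, kb, strlen(kb)) < 0)
			perror("write read_ahead_kb");
		close(fd);
	}

	/* 2) ioctl route: BLKRASET takes the value in 512-byte sectors. */
	fd = open(dev, O_RDONLY);
	if (fd >= 0) {
		if (ioctl(fd, BLKRASET, 2048UL) < 0)	/* 2048 sectors = 1 MB */
			perror("ioctl BLKRASET");
		close(fd);
	}

	return 0;
}

Before this patch only the sysfs path clamped the value to max_sectors; after it, both paths accept read-ahead settings larger than a single request.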