Is there any way to fix the seed of the random factors in HDFS?
I am doing some experiments on HDFS with HADOOP-0.22. In order to make my experiments repeatable, I need to fix the seed of some of HDFS's random factors. Specifically, each time I reformat the file system and import the same set of data, I want each data block to be allocated to the same datanode, with the same name, as in the previous experiment. I don't know whether anyone has done this yet. I appreciate any reply.
Your version of Hadoop supports a pluggable block-placement policy, which you can implement to make placement more static or deterministic for your needs. See HDFS-385 for the technical details, and for the evolutions of that interface since then.
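For example, once you have written such a policy class, you can plug it into the NameNode via `hdfs-site.xml`. A minimal sketch, assuming the `dfs.block.replicator.classname` property introduced by HDFS-385 (the class name shown is a hypothetical example, not a class that ships with Hadoop):

```xml
<!-- hdfs-site.xml: plug in a custom block-placement policy.
     com.example.DeterministicBlockPlacementPolicy is a hypothetical
     class name; your class must extend BlockPlacementPolicy and be
     on the NameNode's classpath. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>com.example.DeterministicBlockPlacementPolicy</value>
</property>
```

Inside your policy you could, for instance, sort the candidate datanodes by hostname and pick targets in that fixed order, which would give you the repeatable block-to-datanode mapping you describe.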