Just starting a new post; the other one got a bit too big.
I started wondering how I could possibly run out of memory; there was no way the image could be that big. Then I realized it: I was using the straight port of the C++ code, which takes a float[] and keeps filling in the holes with data on every tile read. But that is not how I designed my class. I wanted a two-parter:
- Read the header and save it in a Header object
- Loop over all the tiles
- Save the tile data as a list of floats in one big list
With this memory issue, I figured a map would be a much better structure. I could add a method to the public interface, getTileData(nTile), and you would get the height data for that particular tile. So I'm going with that now. But that meant rewriting my readTile method, and with the latest test file from Aaron I got myself in a bind. I believe my logic for reading the tile isn't right; here is my method:
- Code: Select all
private float[] hfzReadTile(EndianBinaryReader reader, HfzHeader fh, UInt32 TileX, UInt32 TileY, float[] pMapData)
{
    UInt32 xOriginTile, xTileBorder, yOriginTile, yTileBorder, i = 0, j = 0;
    Int32 li;
    UInt32 TileSize = fh.TileSize;
    UInt32 mapWidth = fh.nx;
    UInt32 mapHeight = fh.ny;
    UInt32 xTiles = mapWidth / TileSize;
    UInt32 yTiles = mapHeight / TileSize;
    UInt32 xTileSize, yTileSize;

    /* xOriginTile = TileX * TileSize;
    yOriginTile = TileY * TileSize;
    xTileBorder = xOriginTile + TileSize;
    yTileBorder = yOriginTile + TileSize; */

    // "extra" edge tiles (from an uneven map size) are smaller than TileSize
    if (TileX == xTiles)
    {
        xTileSize = 1;
    }
    else
    {
        xTileSize = TileSize;
    }
    if (TileY == yTiles)
    {
        yTileSize = 0;
    }
    else
    {
        yTileSize = TileSize;
    }

    // read vertical scale and offset for this tile
    char LineDepth = ' ';
    Int32 FirstVal = 0;
    try
    {
        float VertScale = reader.ReadSingle();
        float VertOffset = reader.ReadSingle();
        xOriginTile = 0;
        for (j = 0; j < yTileSize; j++)
        {
            // byte depth of the delta values on this line: 1, 2, or 4
            LineDepth = reader.ReadByte().ToString().Single();
            // every row starts with an absolute first value
            FirstVal = reader.ReadInt32();
            float pixelValue = (float)FirstVal * VertScale + VertOffset;
            // set first pixel
            pMapData[xOriginTile] = pixelValue;
            Int32 LastVal = FirstVal;
            for (i = 1; i < xTileSize; i++)
            {
                // sanity check: a one-column extra tile never enters this loop
                if (TileX == xTiles)
                {
                    System.Console.WriteLine("You should never get here");
                }
                // read the delta at the advertised byte depth
                switch (LineDepth)
                {
                    case '1':
                        li = (Int32)reader.ReadByte();
                        break;
                    case '2':
                        li = (Int32)reader.ReadInt16();
                        break;
                    default:
                        li = reader.ReadInt32();
                        break;
                }
                pixelValue = (float)(li + LastVal) * VertScale + VertOffset;
                LastVal = li + LastVal;
                xOriginTile++;
                pMapData[xOriginTile] = pixelValue;
            }
        }
    }
    catch (Exception)
    {
        System.Console.WriteLine("TileSize: " + TileSize);
        System.Console.WriteLine("xTiles: " + xTiles);
        System.Console.WriteLine("yTiles: " + yTiles);
        System.Console.WriteLine("x: " + TileX);
        System.Console.WriteLine("y: " + TileY);
        System.Console.WriteLine("j: " + j);
        System.Console.WriteLine("i: " + i);
        System.Console.WriteLine("LineDepth: " + LineDepth);
        System.Console.WriteLine("FirstVal: " + FirstVal);
        throw;
    }
    System.Console.WriteLine("data size " + TileX + ", " + TileY + " : " + pMapData.Length);
    System.Console.WriteLine("last x: " + TileX);
    System.Console.WriteLine("last y: " + TileY);
    System.Console.WriteLine("last i: " + i);
    System.Console.WriteLine("last j: " + j);
    return pMapData;
}
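For readers not familiar with the format: the inner loop above decodes a simple per-row delta scheme. Each row stores one absolute first value followed by per-pixel deltas (at a byte depth of 1, 2, or 4), and every decoded integer is mapped to a height through VertScale and VertOffset. A simplified toy decoder for a single row, assuming the deltas have already been read from the stream into an int array:

```csharp
using System;

static class RowDecoder
{
    // Toy sketch of the per-row delta decoding done by hfzReadTile above.
    // Simplification (mine): the deltas arrive as a ready-made int[] instead
    // of being read from the stream at 1-, 2-, or 4-byte depth.
    public static float[] DecodeRow(int firstVal, int[] deltas,
                                    float vertScale, float vertOffset)
    {
        var row = new float[deltas.Length + 1];
        int last = firstVal;
        row[0] = firstVal * vertScale + vertOffset;     // absolute first pixel
        for (int i = 0; i < deltas.Length; i++)
        {
            last += deltas[i];                          // running sum of deltas
            row[i + 1] = last * vertScale + vertOffset; // scale to a height
        }
        return row;
    }
}
```

For example, firstVal 10 with deltas {1, -2}, scale 2 and offset 5 decodes to the heights 25, 27, 23.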
[UPDATE]
FOUND IT!!!! There is an unclarity in the spec, IMHO. It turns out a tile does not contain just a single pixel when it is an "extra" tile caused by an uneven map size: the first pixel of every row in the tile still has to be read. Once I had that fixed, it worked! I've updated the code in the block above with my working method. I removed the error output to keep the forum clean.
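To restate the fix: an "extra" edge tile from an uneven map size can be as narrow as a single column, but the reader still has to process every row of it, each with its own depth byte and first value. A toy illustration of the column sizing (my own simplification with hypothetical names, not the actual LibHfz code):

```csharp
using System;

static class EdgeTile
{
    // Width in samples of tile column tileX, for a map whose width is not
    // an exact multiple of tileSize. Hypothetical helper for illustration.
    public static uint TileWidth(uint mapWidth, uint tileSize, uint tileX)
    {
        uint fullTiles = mapWidth / tileSize;
        // the last, partial tile only gets the leftover columns
        return (tileX == fullTiles) ? mapWidth % tileSize : tileSize;
    }
}
```

For a 257-sample-wide map with 256-sample tiles, tile column 0 is 256 samples wide and column 1 is the one-column extra tile; every row of that extra tile still carries its first value.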
[UPDATE WRITE]
I'm starting to doubt it will ever be possible to write the file in C#. The loss of precision on floats is just staggering. The very first calculation on the very first tile write goes totally wrong.
Tile 0,0, first write of the vertical scale:
VertScale: 0,009999821 is what it should be, read from the origin file (byte array: 01001010110101100010001100111100)
VertScale: 0,009999896 is what the code gives me to write to the new file (byte array: 10011011110101100010001100111100)
- Code: Select all
float BlockLevels = (HFmax - HFmin) / Precis + 1;
// calc scale
VertScale = (HFmax - HFmin) / BlockLevels;
VertOffset = HFmin;
if (VertScale <= 0)
{
    VertScale = 1.0f; // this is for niceness
}
Two divisions further and there is no precision left.
I'm starting to wonder whether I should even read the bytes from the file into a float, or instead store them in a decimal so I can calculate with them more accurately, and only convert back to float when saving. I'm just guessing at the moment, because I can't pinpoint whether the precision problem is on the read or on the write.
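One way to test that theory without touching the file I/O: keep the stored values as float (that is what the format holds), but carry out the intermediate arithmetic in double and only narrow back to float at the last moment. A hypothetical sketch of the scale calculation from the block above, not the actual LibHfz code:

```csharp
using System;

static class ScaleCalc
{
    // Same formula as the write code above, but with the two divisions done
    // in double, so only the final narrowing rounds to 24-bit precision.
    // Hypothetical helper for illustration.
    public static float VertScale(float hfMin, float hfMax, float precis)
    {
        double blockLevels = ((double)hfMax - hfMin) / precis + 1.0;
        double scale = ((double)hfMax - hfMin) / blockLevels;
        if (scale <= 0)
        {
            scale = 1.0; // this is for niceness
        }
        return (float)scale;
    }
}
```

If the double version produces the expected bit pattern, the loss comes from the float arithmetic on the write side; if it doesn't, the problem is more likely on the read side.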
I've put my work up to now here:
http://users.telenet.be/sunspot/hfzcsha ... /LibHfz.7z
Maybe someone else can take a look and spot the bug. There are two implementations now: a clean-read namespace, LibHfz, and the direct C++ port the clean namespace was based on, LibHfz.CppPort. There is an NUnit test project called LibFhzTest, with the second test in the suite being the write test. I hardcoded my file locations in it, though, so if you want to run the tests you'll have to change the paths of the files.