Thursday, June 16, 2016

I have debugged the existing issue in the WAH query engine, and it now works fine (a very silly mistake on my part: I had not initialized part of the utility package).
Today I did a MAJOR reorganization of the entire query engine. Even though it currently feels like I am essentially rewriting my entire WAH query engine, I have decided it is very much worth the time, because it will make it much easier to implement the VAL engine (as well as any other algorithms) later.

My old implementation was a pretty naive one, focused solely on making WAH work. The new version generalizes it around the activeWord struct. The main purpose of the change is that I can use the activeWord struct as the parameter type for my existing utilities for ORing and ANDing segments (there are six, to be precise: fillORfill, fillORlit, litORlit, fillANDfill, fillANDlit, and litANDlit). They currently take a word_32 (an entire compressed word), but if I change them to work on activeWord structs instead, I can use the same methods for VAL. The WAH implementation should not change much, since each activeWord is essentially equivalent to one WAH word. The real motivation is VAL: each word_32 read in from the compressed files can hold 1, 2, or 3 segments, depending on the segment length, so the struct lets me split a word into its segments as needed and store each part in an activeWord that can then be ORed or ANDed.
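
To make the idea concrete, here is a rough Go sketch of what I mean. The package name, field names, and function bodies below are just illustrative guesses for this post, not the exact code in my engine; the point is only that the six utilities operate on one segment at a time.

package wahval // hypothetical package name, just for this sketch

// activeWord holds exactly one decoded segment, no matter how many
// segments were packed into the word_32 it came from. Field names here
// are my own placeholders, not necessarily the ones in my code.
type activeWord struct {
	isFill  bool   // true if this segment is a run of identical bits
	fillBit uint32 // the bit value (0 or 1) that a fill repeats
	runLen  uint32 // how many segment-lengths the fill covers
	literal uint32 // raw segment bits, used only when isFill is false
}

// litORlit is one of the six segment utilities, shown here taking
// activeWords instead of raw word_32 values (signature is illustrative).
func litORlit(a, b activeWord) activeWord {
	return activeWord{literal: a.literal | b.literal}
}

// fillORlit ORs one segment-length of a fill with a literal segment:
// a 1-fill swallows the literal, a 0-fill leaves it unchanged.
func fillORlit(fill, lit activeWord) activeWord {
	if fill.isFill && fill.fillBit == 1 {
		return activeWord{isFill: true, fillBit: 1, runLen: 1}
	}
	return lit
}

The nice part is that litORlit, fillORlit, and the rest never need to know whether the segment came from a WAH word or from one slice of a VAL word.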
Once I rewrite everything in the format I'm aiming for, all I have to do is write the decodeNext method, which calls the decodeUp and decodeDown methods that coordinate the flow of segments from the compressed files through the activeWord struct.
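
Continuing the sketch above (again, the names and bodies are placeholders rather than my actual methods), the flow I have in mind looks roughly like this:

package wahval // same hypothetical package as the sketch above

// column stands in for whatever per-bitmap-column state the real engine
// keeps while decoding; names and fields here are placeholders.
type column struct {
	words        []uint32 // compressed word_32 values read from the file
	wordIdx      int      // index of the word currently being decoded
	segmentsLeft int      // segments not yet peeled off the current word
}

// decodeNext hands back the next activeWord for this column, or false
// when the column is fully decoded.
func (c *column) decodeNext() (activeWord, bool) {
	if c.segmentsLeft == 0 && !c.decodeUp() {
		return activeWord{}, false
	}
	return c.decodeDown(), true
}

// decodeUp advances to the next compressed word (placeholder body; the
// real method reads from the compressed file and works out whether the
// word holds 1, 2, or 3 segments for the current segment length).
func (c *column) decodeUp() bool {
	if c.wordIdx >= len(c.words) {
		return false
	}
	c.wordIdx++
	c.segmentsLeft = 1
	return true
}

// decodeDown peels one segment off the current word into an activeWord
// (placeholder body; the real method does the per-segment bit slicing).
func (c *column) decodeDown() activeWord {
	c.segmentsLeft--
	return activeWord{literal: c.words[c.wordIdx-1]}
}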
It feels slow rewriting this, and it often involves breaking a lot of what I already have, but I know this is going to make everything A LOT easier in the future.
