My requirement:
I have 10 tables (with 100 fields each), from which I use only 100 fields in total for download into a flat file. Whenever any of those 100 fields is modified (for one or more rows), those records become eligible for the next incremental download.

My Approach:
I plan to have a few child materialized views (built with joins) and a master materialized view (joining all the child materialized views) with an additional derived column that is the sum of the numeric values of the 100 fields (something like a hash value or checksum).

I will also have a static table with the key value in one column and the hash value in a second column (the hash value/checksum will be calculated using the same logic used in the view).
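For illustration, the static table might look like this (the table and column names here are hypothetical, not part of any existing schema):

```sql
-- Hypothetical static table holding the last-downloaded hash per key
CREATE TABLE T_HASH_SNAPSHOT (
  KEY_VALUE   VARCHAR2(100) NOT NULL,  -- key of the record in the master view
  HASH_VALUE  NUMBER        NOT NULL,  -- checksum computed with the same logic as the view
  CONSTRAINT PK_T_HASH_SNAPSHOT PRIMARY KEY (KEY_VALUE)
);
```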

Later I can compare the hash value in the table against the one in the view to identify the modified records, and use those records for the download.
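The comparison step could be sketched as follows, assuming the derived checksum column in the view is aliased as HASH_VALUE and using T_HASH_SNAPSHOT as a hypothetical name for the static table:

```sql
-- Records whose current hash differs from the stored hash (modified),
-- plus records with no stored hash at all (new) -- both are eligible
-- for the next incremental download.
SELECT v.*
FROM V_MMATVIEW1 v
LEFT JOIN T_HASH_SNAPSHOT s
  ON s.KEY_VALUE = v.Field1
WHERE s.KEY_VALUE IS NULL
   OR s.HASH_VALUE <> v.HASH_VALUE;
```

After a successful download, the snapshot table would be refreshed from the view so the next comparison starts from the new baseline.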

Example:
DDL:
CREATE MATERIALIZED VIEW V_MMATVIEW1
LOGGING CACHE
PCTFREE 0
PCTUSED 99
USING INDEX
--TABLESPACE <tablespace_name>
REFRESH COMPLETE
START WITH SYSDATE
NEXT SYSDATE + 3/24
AS
SELECT
  MCATVIEW1.Field1,
  MCATVIEW2.Field2 AS Field2_V2,
  MCATVIEW3.Field2 AS Field2_V3,
  TO_NUMBER(RAWTOHEX(MCATVIEW1.Field1), 'XXXXXXXXXXXXXXXXXXXX') +
  TO_NUMBER(RAWTOHEX(MCATVIEW2.Field2), 'XXXXXXXXXX') +
  TO_NUMBER(RAWTOHEX(MCATVIEW3.Field2), 'XXXXXXXXXX') AS HASH_VALUE
FROM MCATVIEW1, MCATVIEW2, MCATVIEW3
WHERE MCATVIEW1.Field1 = MCATVIEW2.Field1
  AND MCATVIEW1.Field1 = MCATVIEW3.Field1
  AND MCATVIEW2.Field1 = MCATVIEW3.Field1;

V_MMATVIEW1 - master materialized view
MCATVIEWx - child materialized view (x represents a number, as more than one child view exists)


Problems I face:
• There is no direct Oracle function to convert a string to its ASCII value (the ASCII value itself is not essential; I just need some numeric value), so I had to use RAWTOHEX and then TO_NUMBER.
• When I sum the hash values of more than 30-40 columns, the result exceeds the maximum value of the NUMBER data type.
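One possible way around both problems, offered here only as an untested sketch: Oracle's built-in ORA_HASH function accepts any expression and returns a NUMBER within a fixed bucket range, so it cannot overflow, and it removes the need for the RAWTOHEX/TO_NUMBER conversion entirely:

```sql
-- Hash a concatenation of the columns instead of summing per-column values.
-- ORA_HASH(expr, max_bucket) returns a NUMBER in 0..max_bucket, so the
-- result stays bounded no matter how many columns are concatenated.
-- The '|' separator guards against ('AB','C') hashing the same as ('A','BC').
SELECT MCATVIEW1.Field1,
       ORA_HASH(MCATVIEW1.Field1 || '|' ||
                MCATVIEW2.Field2 || '|' ||
                MCATVIEW3.Field2, 4294967295) AS HASH_VALUE
FROM MCATVIEW1, MCATVIEW2, MCATVIEW3
WHERE MCATVIEW1.Field1 = MCATVIEW2.Field1
  AND MCATVIEW1.Field1 = MCATVIEW3.Field1;
```

Note that any hash admits collisions (two different rows mapping to the same value), so a small fraction of changes could in principle be missed; whether that risk is acceptable depends on the requirement.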


Could you please let me know if there is a more efficient approach, or whether this approach can be made to work efficiently?

Regards,
Dev