I'm looking for help with a database design problem...
I have 1300 data items, 3300 company tickers, and dates (making this time series data).
The data items grow by a few per month, the company tickers by one or two per week, and the dates by one per day.
The data items can be broken up into logical groups given how they are used in our quant model. There are 10 of these groups.
We query this data in many ways. These are the most popular I think:
- Single stock: ~50 data items for a given ticker
- Subset of stocks: 5 to 100 data items for anywhere from 3 to 2,000 tickers
- Single-stock time series: a few years of historical daily data on 3 to 10 data items at a time, for one stock
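To make the three query shapes concrete, here is a minimal sketch against a hypothetical "tall" layout with one row per (ticker, date, item). This is only an illustration of the access patterns, not a recommendation; every table and column name (`facts`, `dt`, `item`, and the sample tickers/items) is made up.

```python
import sqlite3

# Hypothetical tall layout: one row per (ticker, date, item).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE facts (
        ticker TEXT NOT NULL,
        dt     TEXT NOT NULL,   -- ISO date
        item   TEXT NOT NULL,   -- data item name
        value  REAL,
        PRIMARY KEY (ticker, dt, item)
    )
""")
conn.executemany("INSERT INTO facts VALUES (?, ?, ?, ?)", [
    ("AAA", "2024-01-02", "pe_ratio", 15.2),
    ("AAA", "2024-01-02", "momentum", 0.7),
    ("AAA", "2024-01-03", "pe_ratio", 15.4),
    ("BBB", "2024-01-02", "pe_ratio", 22.1),
])

# 1. Single stock: many items for one ticker on one date.
single = conn.execute(
    "SELECT item, value FROM facts WHERE ticker = ? AND dt = ?",
    ("AAA", "2024-01-02")).fetchall()

# 2. Subset of stocks: a given item across a list of tickers.
subset = conn.execute(
    "SELECT ticker, value FROM facts "
    "WHERE ticker IN ('AAA', 'BBB') AND item = 'pe_ratio' "
    "AND dt = '2024-01-02'").fetchall()

# 3. Single-stock time series: one item over a date range.
series = conn.execute(
    "SELECT dt, value FROM facts "
    "WHERE ticker = 'AAA' AND item = 'pe_ratio' ORDER BY dt").fetchall()
```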

In your opinion, how should I design this so it optimizes my queries?

I'm thinking of having one table per ticker per data item group: 3,300 tickers x 10 data item groups = 33,000 tables. Each table would hold one record per date, with roughly 50 to 200 fields (depending on the group), and table names running from tblTicker1-DataItemGrp1 up to tblTicker3300-DataItemGrp10.

(I am using numbers and generic names instead of more informative names for simplicity.)

However, 33,000 tables sounds like a lot! The alternative is to have a ton of fields, which I am told is super inefficient. Or should I just go with super-wide tables of 1,300 fields or more, but fewer of them? What do you think?
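For clarity, the two layouts I'm weighing would look roughly like this (a sketch only, with the ~50-200 and ~1,300 item columns cut down to two each; all names are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Option A: one table per ticker per data item group
# (tblTicker1-DataItemGrp1 ... tblTicker3300-DataItemGrp10),
# each keyed by date, with one column per item in the group.
conn.execute("""
    CREATE TABLE "tblTicker1-DataItemGrp1" (
        dt    TEXT PRIMARY KEY,  -- one record per date
        item1 REAL,              -- stand-in for ~50-200 item columns
        item2 REAL
    )
""")

# Option B: a single super-wide table keyed by (ticker, date),
# with all ~1,300 data items as columns side by side.
conn.execute("""
    CREATE TABLE wide (
        ticker TEXT NOT NULL,
        dt     TEXT NOT NULL,
        item1  REAL,             -- stand-in for ~1,300 item columns
        item2  REAL,
        PRIMARY KEY (ticker, dt)
    )
""")
```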

Thanks a ton!