# Importing your historical data
When you first [connect Katana Cloud Manufacturing](/kb/gettingyourdata?api=katana) to SyncHub, it will automatically begin importing your historical data.
## Changing how far back SyncHub will go
On the [connection dashboard](/kb/connectiondashboardsexplained?api=katana), you can see the currently selected date for each table below the progress bar on the left-hand side. To change this, follow these steps:
1. Click the table name.
2. The date and time that the next run will begin is shown as **next run**, and the duration and starting date of the data that each run pulls is shown as **run size**.
3. Click **run size** to change it. Here you can adjust the date from which SyncHub will start pulling data.
4. Click Update, then Save.
5. Provided you set the date earlier than the one previously displayed below the progress bar, the date shown there will now have changed.
Note: Whenever you change the **run size** field, any data already pulled from after the new date will be re-synced. So, to avoid needlessly re-syncing data, it's a good idea to settle on your start date as soon as you've connected.
## Importing your historical data faster
The speed of your import can be affected by a number of factors. Here are a few things to try if your data sync is running slower than you expect.
### Modifying your run size
Depending on your account, Katana Cloud Manufacturing will likely impose _API throttling limits_, which necessarily limit the speed at which we can download data from their service. The best way to increase your download speed is to reduce the number of API calls needed.
Most endpoints in Katana Cloud Manufacturing allow us to query for data within a given date range. If your run size is two days, it would take at least 15 API calls to sync a month's worth of data, because each run takes at least one API call. If, however, you increased your run size to five weeks, the same data could be fetched in a single API call.
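To make that arithmetic concrete, here is a minimal Python sketch. It is our own illustration, not part of SyncHub, and it assumes a 30-day month and exactly one API call per run:

```python
from math import ceil

def api_calls_for_month(run_size_days: float, days_in_month: int = 30) -> int:
    """Lower bound on API calls needed to sync one month of data,
    assuming each run costs at least one API call."""
    return max(1, ceil(days_in_month / run_size_days))

print(api_calls_for_month(2))   # 2-day run size  -> 15 calls
print(api_calls_for_month(35))  # 5-week run size -> 1 call
```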
So why don't we just default the system to query by a massive date range and get everything in one API call? The answer comes down to load. If you are generating 100,000 records every month, we don't want to download them all at once, as that puts too much pressure on both our servers and those of Katana Cloud Manufacturing. Instead, you want to find a balance where each run downloads approximately 200 records.
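As a rough guide, you can back a run size out of your record volume. The helper below is hypothetical and assumes records arrive uniformly across a 30-day month:

```python
def suggested_run_size_days(records_per_month: int,
                            target_per_run: int = 200,
                            days_in_month: int = 30) -> float:
    """Size each run so it pulls roughly `target_per_run` records,
    assuming records are spread evenly across the month."""
    runs_needed = records_per_month / target_per_run
    return days_in_month / runs_needed

# 100,000 records/month at ~200 records/run -> 0.06 days (about 90 minutes)
print(suggested_run_size_days(100_000))
```

Plug in your own volumes; the point is simply that the ideal run size shrinks as your record volume grows.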
We have instructions for [changing your run size here](/kb/howsynchubworks?api=katana).
### Removing high-usage endpoints
For some endpoints, querying data is particularly inefficient, so if you are not using the data we highly recommend deactivating the endpoint. You can find more information in [this article about child endpoints](/kb/childentitypayloads?api=katana).
### Prioritizing sections of data
Your historical data syncs from oldest to newest, which can be frustrating if you're only interested in starting your reports from last month's data. But not to fear - while your historical data is syncing, you can concurrently prioritize other periods by [creating a Segment](/kb/segments?api=katana).
### Changing your data ingestion mode
If you are using our BigQuery, Redshift or Snowflake connectors, we support _bulk-insert_ ingestion, which makes your imports considerably faster - sometimes up to 100x faster, in fact.
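To see why bulk loading helps, here is a toy latency model. This is our own sketch, not SyncHub's internals; the 5 ms per-row round-trip and 0.5 s per-batch load time are illustrative numbers only:

```python
from math import ceil

def row_by_row_seconds(rows: int, round_trip: float = 0.005) -> float:
    # One network round-trip per inserted row.
    return rows * round_trip

def bulk_insert_seconds(rows: int, batch_size: int = 10_000,
                        per_batch: float = 0.5) -> float:
    # One staged bulk load per batch amortizes the per-request overhead.
    return ceil(rows / batch_size) * per_batch

rows = 1_000_000
print(row_by_row_seconds(rows) / bulk_insert_seconds(rows))  # ~100x in this model
```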
To change your data ingestion mode, use your [Datastore Management module](/kb/datastoremanagement?api=katana).