1. Max batch size = 10000
The assumption here is that the session will use the Bulk API, which is the fastest way to load data into Salesforce. Period. As of Summer '15, the maximum batch size for a Bulk API job is still 10,000 records, so set the batch size to take full advantage of it.
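To make the limit concrete, here is a minimal sketch of what batching looks like against the XML-based Bulk API directly (API v34.0, the Summer '15 era). The instance URL, session id, and helper names are placeholder assumptions for an already-authenticated session, not Informatica internals:

```python
import requests
import xml.etree.ElementTree as ET

MAX_BATCH_SIZE = 10000  # Bulk API per-batch record limit as of Summer '15
NS = "{http://www.force.com/2009/06/asyncapi/dataload}"

instance_url = "https://na1.salesforce.com"  # hypothetical instance
session_id = "<session id from login>"       # hypothetical credential
api = f"{instance_url}/services/async/34.0"
headers = {"X-SFDC-Session": session_id}

def create_job(operation, sobject):
    """Open a Bulk API job and return its id."""
    body = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
        f"<operation>{operation}</operation><object>{sobject}</object>"
        "<contentType>CSV</contentType></jobInfo>"
    )
    resp = requests.post(
        f"{api}/job", data=body,
        headers={**headers, "Content-Type": "application/xml; charset=UTF-8"},
    )
    resp.raise_for_status()
    return ET.fromstring(resp.content).find(NS + "id").text

def add_batches(job_id, csv_header, csv_rows):
    """Split the rows into batches of at most 10,000 records each."""
    for i in range(0, len(csv_rows), MAX_BATCH_SIZE):
        payload = "\n".join([csv_header] + csv_rows[i:i + MAX_BATCH_SIZE])
        resp = requests.post(
            f"{api}/job/{job_id}/batch", data=payload,
            headers={**headers, "Content-Type": "text/csv; charset=UTF-8"},
        )
        resp.raise_for_status()
```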
2. Set fields to NULL = checked
With data loads, it's safest to assume by default that a blank field means the source system has no value to feed that field. In that case, whatever currently sits in the target field should be considered stale, and checking this option lets the load overwrite it with NULL.
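For context, the Bulk API itself draws this distinction in the CSV payload: an empty value is ignored on update, while the literal token #N/A explicitly clears the field. A minimal sketch of producing that convention (the to_bulk_csv helper and the field names are illustrative, not part of any Informatica mapping):

```python
import csv
import io

def to_bulk_csv(records, fields):
    """Render records as Bulk API CSV, turning Python None into #N/A."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(fields)
    for rec in records:
        writer.writerow("#N/A" if rec.get(f) is None else rec[f] for f in fields)
    return buf.getvalue()

# Phone arrives blank from the source, so the load explicitly nulls it out.
print(to_bulk_csv([{"Id": "003xx000001234AAA", "Phone": None}], ["Id", "Phone"]))
```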
3. Use SFDC Bulk API = checked
Self-explanatory, given point 1: leave this unchecked and the session falls back to the standard SOAP-based API, where the 10,000-record batch size above no longer applies.
4. Monitor Bulk Job Until All Batches Processed = checked
When chaining tasks inside a worklet or workflow, monitoring the bulk job until all batches are processed helps ensure that a dependent task starts only after its predecessor truly completes. Otherwise, not only would you increase the risk of encountering locking errors, but you would also risk the next task running against stale data.
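At the API level, "monitored until all batches are processed" boils down to polling the job until the completed and failed batch counts add up to the total. A minimal sketch, assuming the job has already been closed (so numberBatchesTotal is final) and reusing the hypothetical api/headers placeholders from the first sketch:

```python
import time
import requests
import xml.etree.ElementTree as ET

NS = "{http://www.force.com/2009/06/asyncapi/dataload}"

def wait_for_job(api, headers, job_id, poll_seconds=10):
    """Block until every batch in the job has completed or failed."""
    while True:
        resp = requests.get(f"{api}/job/{job_id}", headers=headers)
        resp.raise_for_status()
        info = ET.fromstring(resp.content)
        completed = int(info.find(NS + "numberBatchesCompleted").text)
        failed = int(info.find(NS + "numberBatchesFailed").text)
        total = int(info.find(NS + "numberBatchesTotal").text)
        if total > 0 and completed + failed >= total:
            return completed, failed
        time.sleep(poll_seconds)  # give in-flight batches time to finish
```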
5. Enable field truncation attribute = unchecked
This is equivalent to the Allow field truncation setting in Salesforce Data Loader. Unfortunately, as of Summer '15, the Bulk API still does not support this automatic truncation option. Be aware that truncating values must therefore be done by other means during the transformation, not during the load!
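Since the load step won't truncate for you, the trimming belongs in the transformation. A minimal sketch of the idea; the field lengths here are illustrative and would really come from the target object's describe metadata:

```python
# Illustrative target-field lengths; in practice, pull these from the
# object's describe metadata rather than hard-coding them.
FIELD_LENGTHS = {"FirstName": 40, "LastName": 80, "Title": 128}

def truncate_record(record, lengths=FIELD_LENGTHS):
    """Trim string values to the target field length before the load."""
    out = dict(record)
    for field, max_len in lengths.items():
        value = out.get(field)
        if isinstance(value, str) and len(value) > max_len:
            out[field] = value[:max_len]
    return out
```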
6. Enable hard deletes for BULK API = checked
Why not? Hard deletes significantly improve the performance of mass delete operations by skipping the Recycle Bin and erasing records immediately. Just remember that the user running the job needs the Bulk API Hard Delete permission.
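In Bulk API terms, a hard delete is simply a job whose operation is hardDelete instead of delete, with batches that carry only record Ids. A minimal sketch, reusing the hypothetical create_job/add_batches helpers from the first sketch:

```python
# Hard-delete two contacts: the operation name is "hardDelete" and each
# batch is just a CSV of record Ids. The Ids below are placeholders.
job_id = create_job("hardDelete", "Contact")
add_batches(job_id, "Id", ["003xx000001234AAA", "003xx000001235AAA"])
```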