Our aim is to support all popular data stores. Currently supported are:
- PostgreSQL, out-of-the-box in a Docker container.
- Google BigQuery, using Snowplow as a pipeline and GCP.
- Amazon S3 (as an archive), using Snowplow as a pipeline and AWS SQS/Kinesis.
If you want to test-drive Objectiv without having to set up a backend and data store, try Objectiv Go to quickly spin up a fully functional Objectiv pipeline locally.
The Collector is self-hosted on your own domain, so no data is ever sent to any third party, meaning:
- You have full control over your data.
- Tracking is compliant with privacy legislation such as GDPR, CCPA and PECR.
- Ad blockers are usually avoided, as first-party data tracking is typically not blocked.
The Collector validates every incoming Event against the open analytics taxonomy. If an Event fails validation, the Collector responds with an error and stores the Event in the configured NOK (not-OK) location.
This means no Event sent to the Collector is ever discarded, so you can, for instance, 'repair' failing Events and store them after the fact.
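The validate-and-route behavior described above can be sketched as follows. This is a minimal illustration, not the Collector's actual implementation: the function names, the required fields, and the in-memory stand-ins for the OK and NOK locations are all assumptions.

```python
# Illustrative sketch of validate-and-route: names and locations are
# hypothetical, not the Collector's real API.
OK_LOCATION = []   # stands in for the configured OK data store
NOK_LOCATION = []  # stands in for the configured NOK (not-OK) location

def is_valid_event(event: dict) -> bool:
    # Minimal stand-in for taxonomy validation: require a few fields
    # that Events are assumed to carry.
    return all(key in event for key in ("_type", "id", "time"))

def collect(event: dict) -> dict:
    """Route an incoming Event to the OK or NOK location."""
    if is_valid_event(event):
        OK_LOCATION.append(event)
        return {"status": 200}
    # Invalid Events are not discarded: store them for later repair.
    NOK_LOCATION.append(event)
    return {"status": 400, "error": "event failed taxonomy validation"}
```

In this sketch, an Event missing a required field gets an error response but still lands in the NOK location, so it could be repaired and re-submitted later.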
The Collector handles validation by calling a validation service. The validation service is generated directly from the base schema and validates each Event against the correct schema version. Depending on the result, it returns either success or details on any schema violations.
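A validation service of this shape could look like the sketch below. The schema contents, version strings, and response format are assumptions for illustration; the real service is generated from the base schema.

```python
# Hypothetical per-version schemas; the real ones are generated from
# the base schema and are far richer than a list of required fields.
SCHEMAS = {
    "1.0.0": {"required": ["_type", "id", "time"]},
    "1.1.0": {"required": ["_type", "id", "time", "global_contexts"]},
}

def validate(event: dict, schema_version: str) -> dict:
    """Validate an Event against the schema for the given version."""
    schema = SCHEMAS.get(schema_version)
    if schema is None:
        return {"success": False,
                "violations": [f"unknown schema version: {schema_version}"]}
    # Collect details on every violation, rather than failing fast.
    violations = [f"missing required property: {prop}"
                  for prop in schema["required"] if prop not in event]
    return {"success": not violations, "violations": violations}
```

Returning a list of violations (rather than just a boolean) is what lets the Collector store enough detail alongside a NOK Event to repair it later.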
Out of the box, the Collector provides session enrichment: it sets a session cookie on the client once it starts receiving Events from it, and then adds a corresponding SessionContext to every Event it receives.
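The session-enrichment step can be sketched like this. The cookie name, the SessionContext shape, and the function signature are illustrative assumptions, not the Collector's actual implementation.

```python
import uuid

# Hypothetical cookie name; the real Collector's cookie name may differ.
COOKIE_NAME = "obj_session_id"

def enrich_with_session(event: dict, cookies: dict) -> tuple[dict, dict]:
    """Attach a SessionContext to an Event, setting the cookie if absent."""
    session_id = cookies.get(COOKIE_NAME)
    if session_id is None:
        # First Event from this client: start a new session and
        # set the cookie so subsequent Events share the same id.
        session_id = str(uuid.uuid4())
        cookies[COOKIE_NAME] = session_id
    # Assumed SessionContext shape, for illustration only.
    event.setdefault("global_contexts", []).append(
        {"_type": "SessionContext", "id": session_id}
    )
    return event, cookies
```

Because the cookie is set on the Collector's own (first-party) domain, every later Event from the same client carries the same session id, and the enrichment simply reuses it.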