
What is Apiro's point of difference?


There is no "best" tool. Every engineering discipline involves compromise between competing goals, and, like the designers of any other application, Apiro's designers must make opinionated choices, prioritising some goals at the expense of others. That is why there are so many tech tools in the first place.

Apiro is an eclectic tool that borrows ideas from many disciplines. For example, it has characteristics of an ETL tool, but it is not itself just an ETL tool: it can act as the primary store of ingressed data (though it doesn't have to), rather than merely shuffling data from a source to a destination. Judged purely as an ETL tool, Apiro would be a poor choice.

Apiro can process items in batch mode, but it does not currently have a streaming mode for bulk data; a batch must fit in memory. If your use case, therefore, is along the lines of "I have 2 billion rows of data in datastore X and need to process it and send it to destination Y on a frequency of Z", a classic ETL tool is what you are looking for, and Apiro does not aim to compete in that space. All cloud providers offer a myriad of options for that job, and they will scale better than Apiro when operating in bulk batch mode. Apiro has features to detect, flag, and allow manual or automated correction of invalid data before the pipeline completes, but that feature is of little use if the violation rate in your data produces an unmanageable volume of work items requiring manual processing.

Apiro is intended for real-time processing of manageable units of data. Ideally, unless there is a valid reason otherwise, the preferred unit is the smallest that makes sense: an individual transaction, triggered in real time and processed immediately via persistent queues in and out. Sourcing from unconventional channels such as email is also supported.

Apiro was designed around a flexible linear pipeline that can handle many common business use cases in one package, without requiring any external applications, while remaining simple to understand. Specifically, the pipeline has a consolidation phase that lets you aggregate different DataPoints from multiple sources, or validate the same DataPoint against multiple redundant sources; you can then enhance and augment the consolidated data, send events based on its content, and shuttle it off to downstream systems. All of this can potentially be done declaratively. In addition, extension points (such as new validations and augmentations) can be implemented via scripts, Java classes, or REST services.
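To make the extension-point idea concrete, here is a purely hypothetical sketch of what a script-based validation might look like. The function name, the dict-shaped data point, and the list-of-messages return convention are our illustrative assumptions, not Apiro's actual API:

```python
# Purely illustrative sketch: the names below are ours, not Apiro's actual API.
# It shows the general shape of a script-based validation extension: a small
# function that inspects one data point and flags a standard violation so the
# item can be routed for manual or automated correction before the pipeline
# completes, rather than being silently passed downstream.

def validate_iban_length(data_point: dict) -> list[str]:
    """Return a list of violation messages (empty means the value passed)."""
    value = data_point.get("iban", "")
    if len(value) < 15:  # real IBANs are 15-34 characters long
        return [f"iban '{value}' is too short to be a valid IBAN"]
    return []

print(validate_iban_length({"iban": "DE44500105175407324931"}))  # []
print(validate_iban_length({"iban": "DE44"}))                    # one violation
```

The same check could equally be expressed as a Java class or exposed behind a REST endpoint; the pipeline only cares that invalid items are flagged before the process completes.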

Beyond making these points, we really dislike marketing spin. Apiro is a technical tool, and we presume it will be evaluated by professionals on its merits in the context of the unique use cases that need to be addressed.

On that note: since a picture is worth a thousand words, rather than drown you in verbiage, the best way to decide whether Apiro is the right fit for your organisation is to install the Apiro Community Version with the supplied sample configurations, which have been selected to showcase the main use cases Apiro addresses. Work through the getting-started sections of the documentation. This will quickly let you grok the essential spirit of what Apiro is about and decide whether it is worth investigating further; if so, the more detailed sections of the documentation will guide you.

... the Apiro Team