
Ascend.io and Looker Unify ETL Across Data Lakes, Warehouses, and Pipelines

Through Ascend’s New, Native Integration for Looker, Data Teams Can Now Reach Beyond the Data Warehouse to Directly Connect, Explore, and Unify Live Data From Data Lakes, Warehouses, and Pipelines

Ascend.io, the data engineering company, announced the availability of a native integration between the Ascend and Looker platforms. The integration closes the long-standing gap between enterprise data engineering and data analysis platforms, giving business intelligence teams direct access to live data pipeline sources for the first time.

Until now, upstream data sources and systems have been the domain of ETL and data engineering, siloed in software development teams away from the more established business analytics teams that work with Looker and other SQL-based business intelligence tools. As a result, business analytics teams lost time and productivity waiting for the data they needed, while data engineering teams faced an ever-growing backlog of data requests that was impossible to keep up with.


“Analysts across the enterprise increasingly need to harness the business value of data found beyond data warehouses,” said Shohei Narron, technology partner manager at Looker, which joined with Google Cloud in February of 2020. “Ascend brings an unprecedented capability to the Looker ecosystem with which BI teams and analysts can self-serve live data directly from data lakes and pipelines in their SQL statements. As a result, Looker visualizations, LookML models, and Looker-based APIs can harness data pipelines with no further ETL synchronization required.”


Data teams are adopting the low-code Ascend platform to bring autonomous data pipelines and automated governance to their data lakes. The Ascend platform standardizes and automates every aspect of pipeline design and operation, providing the fastest and easiest way to unify ETL and data processing across disparate data silos. With this integration, Ascend enables SQL access to every stage of the data lifecycle.
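The idea of SQL access at every stage of the data lifecycle can be illustrated with a minimal sketch. The stage names and table layout below are hypothetical (not Ascend's actual API); the point is that each intermediate step of a pipeline is materialized as a SQL-addressable table, so a BI layer can query live intermediate data rather than only the final warehouse table:

```python
import sqlite3

# Toy three-stage pipeline: raw ingest -> cleaned -> aggregated.
# Each stage is exposed as its own SQL table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage_raw (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO stage_raw VALUES (?, ?)",
    [("us-west", 120.0), ("us-east", 95.5), ("us-east", None)],
)

# Stage 2: cleaning drops malformed rows.
conn.execute(
    "CREATE TABLE stage_cleaned AS "
    "SELECT region, revenue FROM stage_raw WHERE revenue IS NOT NULL"
)

# Stage 3: aggregation.
conn.execute(
    "CREATE TABLE stage_aggregated AS "
    "SELECT region, SUM(revenue) AS revenue FROM stage_cleaned GROUP BY region"
)

# A SQL-based BI tool can now reach any stage, not just the final table.
rows = conn.execute(
    "SELECT region, revenue FROM stage_aggregated ORDER BY region"
).fetchall()
print(rows)  # [('us-east', 95.5), ('us-west', 120.0)]
```

In a production setting the stages would live in a data lake or warehouse rather than SQLite, but the access pattern a tool like Looker relies on is the same: every stage answers plain SQL.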


“Ascend and Looker both recognize that businesses need the ability to extract value from data more quickly than they’ve been able to in the past,” said Sean Knapp, founder and CEO of Ascend.io. “Unfortunately, many businesses are hamstrung by an outdated, inflexible data architecture and a lack of data engineers. In response, we’ve developed solutions that combine data automation and orchestration, allowing a growing number of data scientists and analysts to become ‘citizen data engineers,’ able to manage the end-to-end data lifecycle themselves. The integration also democratizes data access on the Looker platform and across the enterprise, allowing data teams to drive innovation and deliver insights with a faster ‘time to why.'”

“The combination of Ascend and Looker is an important competitive advantage for us,” said Sheel Choksi, director of operations at Mayvenn, an Andreessen Horowitz- and Essence Ventures-backed startup focused on partnering with stylists to grow their clientele. “With Ascend’s powerful ETL capabilities, we built and operationalized a full ecosystem of data pipelines in record time. Now our team can create reporting and dashboards in Looker that highlight up-to-the-minute business metrics from all our data sources, not just those from our data warehouse.”

Commonly implemented on data lakes using Apache Spark, data pipelines are the leading standard for processing massive volumes of structured, semi-structured, and unstructured data on the cheapest forms of cloud compute and storage available.
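A real lake-based pipeline of this kind would typically run on Apache Spark, but the core task it performs — reconciling structured, semi-structured, and unstructured inputs into one queryable shape — can be sketched in plain Python (all data and field names here are invented for illustration):

```python
import csv
import io
import json

# Three toy inputs of the three kinds a data-lake pipeline must handle.
structured = "region,revenue\nus-west,120.0\nus-east,95.5\n"   # CSV file
semi_structured = '{"region": "emea", "revenue": 60.25}'        # JSON document
unstructured = "order from apac region, revenue 30.00"          # free text

records = []

# Structured: CSV rows map directly onto a fixed schema.
for row in csv.DictReader(io.StringIO(structured)):
    records.append({"region": row["region"], "revenue": float(row["revenue"])})

# Semi-structured: JSON carries its own (possibly varying) schema.
doc = json.loads(semi_structured)
records.append({"region": doc["region"], "revenue": float(doc["revenue"])})

# Unstructured: text needs ad-hoc extraction (a crude split here;
# a real job might use regexes or an ML model).
words = unstructured.split()
records.append({"region": words[2], "revenue": float(words[-1])})

# Once normalized, all sources can be processed uniformly.
total = round(sum(r["revenue"] for r in records), 2)
print(total)  # 305.75
```

Spark applies the same normalize-then-process pattern, distributed across a cluster over cheap object storage, which is what makes it the workhorse for high-volume pipelines the article describes.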

