Artificial Intelligence | News | Insights | AiThority

WANdisco Deepens Product Integration With Databricks to Accelerate Time to Value for Cloud-Scale Analytics

WANdisco LiveData Migrator closes last mile gap to move Hadoop data and Hive metadata directly into Delta Lake on Databricks, enabling faster adoption of AI/ML

WANdisco, the LiveData company, announced that its LiveData Migrator platform, which automates the migration and replication of Hadoop data from on-premises to the cloud, can now automate the migration of Apache Hive metadata directly into Databricks to help users save time, reduce costs, and more quickly enable new AI and machine learning capabilities. For the first time, enterprises that want to migrate on-premises Hadoop and Spark content from Hive to Databricks can do so at scale and with high efficiency, while mitigating the many risks associated with large-scale cloud migrations.

  • Data sets do not need to be migrated in full before conversion to the Delta format; LiveData Migrator automates incremental transformation to Delta Lake.

  • Time to business insights is accelerated by eliminating manual data mappings, with direct, native access to structured data in Databricks from on-premises environments.

  • A single pane of glass manages both Hadoop data and Hive metadata migrations.


Ongoing changes to source metadata are reflected immediately in Databricks’ Lakehouse platform, and on-premises data formats used in Hadoop and Hive are automatically made available in Delta Lake on Databricks. By combining data and metadata and making on-premises content immediately usable in Databricks, users can eliminate migration tasks that previously required constructing data pipelines to transform, filter, and adjust data, along with the significant up-front planning and staging those pipelines entail. The work otherwise required to set up auto-load pipelines that identify newly landed data and convert it to its final form as part of a processing pipeline is eliminated.


“This new feature brings together the power of Databricks and WANdisco LiveData Migrator,” said WANdisco CTO Paul Scott-Murphy. “Data and metadata are migrated automatically without any disruption or change to existing systems. Teams can implement their cloud modernization strategies without risk, immediately employing workloads and data that were locked up on-premises, now in the cloud using the lakehouse platform offered by Databricks.”

“Enterprises want to break silos and bring all their data into a lakehouse for analytics and AI but they have been constrained by their on-premises infrastructure,” said Pankaj Dugar, Vice President of Product Partnerships at Databricks. “With the new Hive metadata capabilities in WANdisco’s LiveData Migrator, it will now be much easier to take advantage of Databricks’ Lakehouse Platform.”


LiveData Migrator automates cloud data migration at any scale, enabling companies to move data from on-premises Hadoop-oriented data lakes to any cloud within minutes, even while the source data sets are under active change. Businesses can migrate their data without specialist engineers or outside consultants, enabling their digital transformation. LiveData Migrator works without any production system downtime or business disruption while ensuring the migration is complete and continuous, with any ongoing data changes replicated to the target cloud environment.

To make Hive data and metadata available for direct use in Delta Lake on Databricks, users configure LiveData Migrator with a data migration target for the chosen cloud storage and Databricks, opting to convert content to the Delta Lake format when creating the Databricks metadata target. They then define a migration rule that selects the Hive databases and tables to migrate.
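The configuration steps above can be sketched as simple data structures. This is an illustrative sketch only: every function name and field below is a hypothetical assumption chosen for clarity, not the actual LiveData Migrator API.

```python
# Hypothetical sketch of the LiveData Migrator configuration steps described
# above. All names and fields are illustrative assumptions, not the real API.

def make_databricks_target(workspace_url, cloud_storage_uri, convert_to_delta=True):
    """Describe a migration target: cloud storage plus a Databricks metadata target."""
    return {
        "type": "databricks",
        "workspace": workspace_url,
        "storage": cloud_storage_uri,
        # Opting in here corresponds to choosing Delta Lake conversion when
        # creating the Databricks metadata target.
        "convert_to_delta": convert_to_delta,
    }

def make_migration_rule(databases, tables=None):
    """Select the Hive databases (and optionally specific tables) to migrate."""
    return {
        "hive_databases": list(databases),
        "hive_tables": list(tables) if tables else "all",
    }

# Hypothetical example values:
target = make_databricks_target(
    "https://example.cloud.databricks.com",
    "abfss://lake@example.dfs.core.windows.net/landing",
)
rule = make_migration_rule(["sales", "marketing"], tables=["sales.orders"])
```

Once a target and rule like these are defined, the article's description implies LiveData Migrator handles the rest: data lands in the chosen cloud storage, metadata lands in Databricks, and ongoing source changes are replicated incrementally.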

