Webinar: Cognitive Investment Management, Research & Risk Management

Videos

In this short webinar, watch how we help leading Investment Managers, Banks, Brokers, Asset Managers and Hedge Funds supercharge their existing investment approaches with Big Data, Semantics and Machine Learning.

 

Director James Phare talks about how we use machine learning algorithms, semantics and other techniques to process huge volumes of unstructured, semi-structured and structured data in order to identify hidden risks and opportunities.

 

Learn to extract relevant content from huge volumes of data, standardise it and transform it into structures that can be analysed. We demonstrate how we use visualisation tools such as Tableau to explore and analyse news stories related to the Volkswagen emissions crisis. We also show you how we use graph technology such as Neo4j to structure this data into graphs or networks in order to explore hidden relationships and dependencies in company, sector, country and product data.
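
As a rough illustration of the graph step, the sketch below loads a handful of extracted news items into Neo4j using the official Python driver. The connection details, node labels and relationship types are assumptions for illustration, not the actual model used in the webinar.

    # Illustrative sketch: loading extracted news items into Neo4j so that
    # company, sector and story relationships can be explored as a graph.
    # Connection details, labels and relationship types are assumptions.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def load_story(tx, story):
        # MERGE keeps companies and sectors unique; each story is linked to the
        # company it mentions, and the company to its sector.
        tx.run(
            """
            MERGE (c:Company {name: $company})
            MERGE (s:Sector {name: $sector})
            MERGE (c)-[:IN_SECTOR]->(s)
            CREATE (n:NewsStory {headline: $headline, published: $published})
            CREATE (n)-[:MENTIONS]->(c)
            """,
            **story,
        )

    stories = [
        {"company": "Volkswagen", "sector": "Automotive",
         "headline": "VW admits emissions test manipulation", "published": "2015-09-22"},
    ]

    with driver.session() as session:
        for story in stories:
            # execute_write is the transaction helper in recent (5.x) driver versions
            session.execute_write(load_story, story)
    driver.close()

Once the data is in this shape, a query such as MATCH (n:NewsStory)-[:MENTIONS]->(:Company)-[:IN_SECTOR]->(s:Sector) RETURN s.name, count(n) surfaces which sectors the coverage clusters around.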

 

 

If you would like to see the webinar slides, follow this link.

 

 

 

About us

 

Data to Value are a specialist data consultancy based in London. We apply graph technology to a variety of data requirements as part of next-generation data strategies. Contact us for more details if you are interested in finding out how we can help your organisation leverage this approach.

 


The Hadoop Ecosystem: HDFS, YARN, Hive, Pig, HBase and growing…

Blog

Hadoop is the leading open-source software framework for scalable, reliable and distributed computing. With the world producing data in the zettabyte range, there is a growing need for cheap, scalable, reliable and fast computing to process and make sense of all of this data. The technology underlying Hadoop was pioneered at Google, which found no software on the market that fit its needs: indexing the web and analysing search patterns required deep, computationally intensive analytics to improve its understanding of user behaviour. Hadoop is built for exactly this kind of workload, running across a large number of machines that share the work to optimise performance. Moreover, Hadoop replicates data across machines, ensuring that processing is not disrupted if one or more machines stop working. Hadoop has been extensively developed over the years, adding new technologies and features to the existing software and creating the ecosystem we have today.

 

HDFS, or the Hadoop Distributed File System, is the primary storage system used by Hadoop. It is the key tool for managing Big Data and supporting analytic applications in a scalable, cheap and rapid way. Hadoop usually runs on low-cost commodity machines, where server failures are fairly common. To cope with such a failure-prone environment, the file system distributes data across different servers in different server racks, keeping the data highly available. Moreover, when HDFS ingests data it breaks it down into smaller blocks that are assigned to different nodes in a cluster, which allows for parallel processing and increases the speed at which the data is processed.
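
As a small, hypothetical example of that behaviour, the sketch below copies a local file into HDFS with the standard hdfs command-line tools and then asks fsck how the file was split into blocks and where the replicas live; the paths are made up for illustration.

    # Illustrative sketch: put a local file into HDFS, then inspect its blocks
    # and replica locations. Assumes the standard `hdfs` CLI is on the PATH;
    # the paths are placeholders.
    import subprocess

    def hdfs_dfs(*args):
        subprocess.run(["hdfs", "dfs", *args], check=True)

    hdfs_dfs("-mkdir", "-p", "/data/news")
    hdfs_dfs("-put", "news_2015.csv", "/data/news/")

    # fsck reports how HDFS split the file into blocks and which datanodes
    # hold each replica.
    subprocess.run(
        ["hdfs", "fsck", "/data/news/news_2015.csv", "-files", "-blocks", "-locations"],
        check=True,
    )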

 

Hadoop YARN (Yet Another Resource Negotiator) is the resource management and job scheduling layer of Hadoop, and the successor to the original MapReduce runtime. The original MapReduce is no longer viable in today’s environment. It was created around ten years ago, and as the volume of data being created grew dramatically, so did the time MapReduce needed to process it, stretching from minutes to hours. Secondly, programming MapReduce jobs is a time-consuming and complex task that requires extensive training. And lastly, MapReduce did not fit all business scenarios, as it was created for the single purpose of indexing the web. YARN provides many benefits over its predecessor: better scalability through distributed life-cycle management and support for multiple MapReduce APIs in a single cluster. It allows for faster processing and, coupled with the in-memory capabilities of engines such as Apache Spark, comes close to real-time processing. YARN also supports many frameworks beyond MapReduce, making it more flexible for different use cases.
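
To make the YARN-plus-Spark point concrete, here is a minimal PySpark sketch: the job itself is an ordinary word count, but setting the master to yarn hands resource allocation for the executors to YARN. The file paths and application name are assumptions for illustration; in practice such a job is typically launched with spark-submit --master yarn.

    # Illustrative sketch: a simple word count whose executors are scheduled by YARN.
    # Paths and names are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("news-term-counts")
        .master("yarn")  # let YARN allocate containers for the executors
        .getOrCreate()
    )

    lines = spark.read.text("hdfs:///data/news/news_2015.csv")
    counts = (
        lines.rdd.flatMap(lambda row: row.value.split())
        .map(lambda word: (word, 1))
        .reduceByKey(lambda a, b: a + b)
    )
    counts.saveAsTextFile("hdfs:///data/news/term_counts")
    spark.stop()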

 

Apache Hive is a data warehouse management and analytics system built on Hadoop. Hive was initially developed by Facebook but soon became an open-source project and has been used by many other companies ever since. Apache Hive uses a SQL-like query language called HiveQL, which compiles queries into MapReduce, Apache Tez or Spark jobs.
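
As a hypothetical example of what HiveQL looks like in practice, the sketch below runs a simple aggregation from Python through the PyHive client; the host, table and column names are assumptions.

    # Illustrative sketch: querying Hive from Python. The SQL-like HiveQL query
    # is compiled by Hive into MapReduce, Tez or Spark jobs behind the scenes.
    # Host, table and column names are placeholders.
    from pyhive import hive

    conn = hive.Connection(host="hive-server", port=10000, database="default")
    cursor = conn.cursor()
    cursor.execute(
        """
        SELECT sector, COUNT(*) AS stories
        FROM news_stories
        WHERE published_date >= '2015-09-01'
        GROUP BY sector
        ORDER BY stories DESC
        """
    )
    for sector, stories in cursor.fetchall():
        print(sector, stories)
    conn.close()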

 

Apache Pig is a platform for analysing large sets of data. It includes a high-level scripting language called Pig Latin that removes much of the manual coding required when writing MapReduce jobs in Java. Apache Pig is somewhat similar to Apache Hive, though some users say it is easier to transition to Hive than to Pig if you come from an RDBMS SQL background. However, both platforms have a place in the market: Hive is more optimised for standard queries and is easier to pick up, whereas Pig is better for tasks that require more customisation.
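
For comparison, here is a rough sketch of the same kind of aggregation expressed as a Pig Latin script and submitted with the standard pig command; the file paths and field names are assumptions.

    # Illustrative sketch: a Pig Latin script that groups news stories by sector,
    # written out and submitted via the `pig` CLI. Paths and fields are placeholders.
    import subprocess

    pig_script = """
    stories   = LOAD '/data/news/news_2015.csv' USING PigStorage(',')
                AS (headline:chararray, sector:chararray, published:chararray);
    by_sector = GROUP stories BY sector;
    counts    = FOREACH by_sector GENERATE group AS sector, COUNT(stories) AS n;
    STORE counts INTO '/data/news/sector_counts';
    """

    with open("sector_counts.pig", "w") as f:
        f.write(pig_script)

    subprocess.run(["pig", "-f", "sector_counts.pig"], check=True)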

 

Apache HBase is a non-relational database that runs on top of HDFS. This schema-less database supports in-memory caching via the block cache and Bloom filters, providing near real-time access to large datasets and making it especially useful for the sparse data common in many Big Data use cases. However, it is not a replacement for a relational database, as it does not speak SQL or support cross-record transactions or joins.
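
Below is a minimal sketch of HBase access from Python via the Thrift-based happybase client; the table, column family and row key are assumptions, and the column family would need to exist before the put.

    # Illustrative sketch: writing and reading a sparse row in HBase through
    # happybase (the HBase Thrift service must be running). Names are placeholders.
    import happybase

    connection = happybase.Connection(host="hbase-thrift-server")
    table = connection.table("news_stories")

    # Each row stores only the cells it actually has, which suits sparse data.
    table.put(b"vw-2015-09-22", {
        b"story:headline": b"VW admits emissions test manipulation",
        b"story:sector": b"Automotive",
    })

    row = table.row(b"vw-2015-09-22")
    print(row[b"story:headline"].decode())

    connection.close()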

 

Hadoop has become the low-cost, industry-standard ecosystem for securely analysing high-volume data from a variety of enterprise sources. We specialise in helping organisations leverage this stack to rapidly understand their data landscape and deliver faster, more insightful reporting and analytics. Contact us for more details.


Webinar: Data Lineage Unplugged, Tuesday 8th of December 2015

Events

We are thrilled to be hosting a free joint webinar with our software partner Manta Tools on the 8th of December via the Ustream platform. In the 40-minute session you will learn practical approaches and see Manta Tools’ innovative software in action. The insights provided will help you accelerate the resolution of complex data lineage problems.

 

Join us if you have challenges with:

  • providing full end-to-end data lineage.
  • analysing the impact of change on downstream applications.
  • root-cause analysis.
  • regulatory compliance.

 

 

Nigel Higgs – Managing Director of Data to Value – will explain the lean approach to analysing and documenting the data lineage environment.

 


Petr Stipek – the VP of Business Development for Manta Tools – will explain why ‘custom code’ is at the root of many failures to complete the end-to-end challenge.

To register for this event, please follow the link here.
