Current List of 2020 Discussions
Full list and discussion agenda to be published by March 20th
As our list of projects grows and our timelines shrink, we constantly need to prioritize what to deliver. Reducing scope can improve delivery speed but may result in lower quality output. Cutting the budget may increase the timeline with less impact on quality. Taking a page from the project manager’s handbook, we will tackle this triangle of priorities and answer some of the following questions:
- When should you release the bare bones of a new project vs something complete?
- How can you speed up your project?
- How can you make your project better, within constraints?
- What counts as a budget for analysts?
During the discussion we will seek to develop the appropriate steps to take when considering what changes to make to a project in the constraints of budget, deadline, and scope. You will leave with fresh ideas of how to manage and improve your analytics workload.
(Reference: the project management triangle – https://en.wikipedia.org/wiki/Project_management_triangle)
Data literacy can lead to a conceptual transformation within a business. The challenge is to get various stakeholders to either buy into data literacy or put away past failed attempts and trust you to deliver the desired transformational outcome this time round.
In this discussion, led by Head of Data Science at Wagestream, Allison Ma, we will tackle this challenge head on and look to answer the following questions:
- What data literacy challenges do you face within your org?
- How do you convince different teams to accept your help with specific goals?
- Should our approach differ based on stakeholders’ seniority level?
- How do we handle (inevitable) push back productively?
- How do we improve an already data-driven team and help them level up?
During the discussion we will seek to expose as many use cases as possible so you can go back to the office with tangible concepts applicable to your specific circumstances.
So you are looking to grow your data team but are unsure about the most suitable roles and team organisational structure. You are not alone – many team leaders are confronted with challenging questions regarding their talent recruitment and retention strategy, particularly with the continuous evolution of roles and skills in our industry.
Allison Ma, Head of Data Science for Wagestream, is currently facing these challenges. In this discussion, she will guide us through the following questions:
- How technical should your hires be?
- Should you hire in anticipation of company growth or after growth?
- What about hiring for current projects vs upcoming projects?
- When is a junior candidate more valuable than a senior candidate?
- Is there an optimal mix of levels and skills within an analytics team?
These and more will be answered in this exciting discussion about analytics team growth. We will aim for both analysts and managers to leave with insights on how individuals fit into an overall team and how to piece a great team together.
The North Star Metric (NSM) is a fairly new concept predominantly used by start-ups. It is a metric designed to capture the core value that your product delivers to customers. Optimising your efforts to grow this metric is key to driving sustainable growth across your entire customer base.
NSM helps teams move from driving short-term growth to focusing on the generation of long-term retained customer growth.
In this discussion, Andrea Mestriner, Global Head of Analytics at Yoox Net A Porter, will address the following questions:
- Is one metric really enough?
- What practical barriers will you face when implementing an NSM, and how can you overcome them?
- How do you determine your NSM? Is there a single, easy-to-understand metric that captures the results of all your product, technology, marketing, experimentation and development efforts?
- Can you create a relationship tree that connects your NSM to the metrics used daily by the various teams?
- What are the traits and attributes of a good NSM?
- How does NSM help you drive sustainable growth?
Join us for a pragmatic discussion about one of the emerging themes in business measurement. Expect to leave with a better understanding of how NSM could work for you.
A recent McKinsey report found that customer-focused organisations are 23x more likely to acquire customers and 6x more likely to retain them. Doing so successfully requires a combination of technology, processes and organisational focus on the customer. Customer Data Platforms (CDPs) promise to deliver on the technology side of things.
However, implementing a CDP is a significant financial, resource and time undertaking. Most organisations would demand a quick return on such a commitment.
Luxury online retailer, Yoox Net A Porter, is in the midst of a CDP implementation project. In this discussion, led by Andrea Mestriner, Global Head of Analytics, we will discuss:
- The essential conditions for initiating a CDP project
- What are the short-term business use cases (to justify the investment)?
- Who should be involved in the project? What should the selection process look like?
- What organisational changes are required beyond just the tech stack to make a CDP project a success?
- What are the long-term use cases?
Are you considering a CDP? Do you need to get management buy-in? This will be a practical conversation designed to give you a good view of the critical aspects of CDPs so you can go back to your management with a rational view of the pros and cons of such a project.
Join this discussion and walk away with a list of ideas to demonstrate value add immediately following a CDP implementation.
It is not new to discuss data as a product in relation to big platforms and expensive IT programmes but what about data products as everyday features and capabilities embedded within the teams that rely on them?
At Burberry, Ben Stephens’ team is building data products to be used across the business. In this discussion Ben will explore what we mean by data products, and how they can be developed and integrated into teams and processes so that they become part of the business’s daily life. Using examples such as visually integrating data, insights and recommendations into a non-technical team’s workflow, we will look at the benefits of putting the right data at subject matter experts’ fingertips. We will also explore how to identify the right time to introduce such capabilities, and the potential pitfalls of becoming ‘IT Support’ for the data solutions we build.
We will finish by discussing how this might impact data teams and the kinds of roles that become necessary – designers, engineers, journalists and potentially more – embedded alongside analysts.
Join this thought-provoking discussion if you are looking to empower your business teams or you are looking for new ways to explore your data.
Businesses have one thing in common – they should be focused on the customer. This means their success is based upon removing the blinkers, expanding beyond a transactional view and understanding as much as possible about their customers. Who they are, how they behave beyond their interaction with the business, and how to get more of them. The challenge, however, is that a single brand or business is in the position of providing, at best, a handful of products or services to any single customer which greatly limits the scope of what can be observed directly.
Burberry is placing an ever-growing emphasis on understanding and serving their customer. In this discussion, Ben Stephens will focus on known customers or prospects where there exists a degree of direct contact – comms data, transactional data, physical interactions in a store. We will explore the different methods and opportunities for improving the quantity and quality of data held on the customer, covering anything from smarter/progressive data capture to good old-fashioned buying it. Whenever we touch on the customer there will inevitably be privacy or ethical considerations; however, that is not the key focus of this discussion.
Finally, we will address the critical question of how we can actually use what we know for the benefit of the customer and not resign the most valuable data to the confines of a PowerPoint deck.
Creating or growing a data organisation is an important process for any business. The range of skills that a modern analyst/data scientist can call on has expanded hugely, and the technology at their disposal has likewise taken a huge leap, with a constant flow of new tools.
In this discussion, led by Ben Stephens of Burberry, we will look at the most important and impactful capabilities that every data team should be able to rely on, as well as the resources needed to maintain and develop them:
- Reporting and discovery – technology to support democratisation and self-service while giving an analyst what they need to ask questions and generate insight
- Prototyping and iterating – developing data products & the algorithms that power them can deliver huge benefits; without the right capabilities they can also take forever
- What are the key challenges for your business where technology is the unlock?
- Finally, what additional factors need to be considered when onboarding tech – do you have dedicated engineers? What about your stakeholder maturity? Where does the money come from?
All businesses are different and augmenting technology can sometimes be a challenge. We will end this discussion by tackling these issues and reviewing how to approach the evolution and enablement of a growing data team.
The age of the web analyst is long gone. Web data has been replaced with data, lots of it, very raw, often event level, coming from heterogeneous sources and landing at different times. Today, making sense of data requires both data literacy and a specialised skillset.
With more and more businesses investing in data engineering teams, there are lots of analysts doing data engineering, but under a different name: analysis. How do these individuals get on this path, what skills do they acquire and how do they get started? Even where there is no real need for analytics teams to do data engineering – because a dedicated team supports them – many of the skills and tools of the trade will unlock massive productivity gains in analytics teams, shortening the analysis feedback loop and increasing their ability to tackle more complex business questions.
In this session, led by Head of Data at Secret Escapes, Carmen Mardiros, we will explore the essential skills for a data analyst in 2020. We will also look at the workflow and skills of analytics teams and how you might go about setting up a structured learning path for upskilling where both the organisation and the individuals benefit. What organisational and managerial buy-in is needed?
Finally, we will examine the next level skills and tools that attract more technically minded analysts, enabling them to tackle ever more complex problems.
This discussion is open to both managers looking to shape their teams in a data science world as well as senior analysts looking to understand what they need to do to develop their careers further.
In today’s fast paced digital world, it is not the absence of data that is holding organisations back but the sheer volume and variety of often heterogeneous data sources. What does a clear path to extracting value from this data look like? How do we get to a place where data scientists can start building models and analysts start asking difficult business questions with the data? I believe the answer is multi-faceted: approach, technology, skills and finally, the data itself. This will be a highly collaborative discussion where we will be looking at stories from the trenches and specific examples of what worked and did not work, and things that, in hindsight, we would have done differently.
From a technology point of view, we will look at options which can be quickly experimented with: cloud technologies and tools that bring all your data into one place, modern approaches to data warehouse design, and tools to visually explore data.
But technology by itself will not generate value. We will also look at what skills you need and what organisational setup can best leverage the newly built capabilities.
And finally, we are also going to look at some of the challenges along the way — securing and maintaining buy-in, dealing with inconsistent data generated by legacy systems, using home-grown vs off the shelf tools, balancing the build-up of data tech-debt with the generation of business value.
Jim Henson’s famous blue monster achieved his iconic global status through a voracious appetite for cookies; there was just never enough of the gooey choc-chip goodness to satisfy the creature. Following the introduction of GDPR, we too find ourselves facing another cookie monster – the consent management platform!
Whether OneTrust, Commanders Act, Quantcast, Cookiebot or other tool, these platforms require setting up in a fashion that maintains customers’ privacy without destroying your capability to track and market to them within the bounds of the law.
Bourne Leisure are going through this process now. Dan Grainger, Analytics Manager at Bourne Leisure, will lead you through the ins and outs of the process and the considerations that need to be accounted for, including:
- What should you consider prior to implementation?
- Who is responsible for implementation and who should be involved?
- Where should consent management live post-implementation?
- How do you socialise the (potential) business impact?
- What are the “gotchas” and grey areas?
Om-nom-nom indeed. If consent management is in your remit then join Dan for a stimulating discussion about how to do it right.
Note: we will not cover vendor selection in this discussion as the focus will mainly be on the management and data aspects of the challenge.
…sang John Lennon, parodying Bob Dylan. And in many ways, he was right. In our increasingly data-enabled world, without significant headcount it is near-impossible for digital analysts to keep pace with stakeholder insight demands.
Often this challenge leads to analyst frustration – a feeling that they are a service centre for ticketed requests. It is equally frustrating for stakeholders, eventually leading to a loss of belief in analytics’ ability to deliver valuable insight.
Self-service is a common solution which reduces the analytics team’s workload whilst increasing stakeholders’ access to insight and speed to that insight. If implemented effectively, it will also stimulate stakeholders to ask more meaningful questions of the data.
So how do we enable and advocate an efficient self-service model?
In this discussion we will:
- Define the possible self-service models – is it a one-size-fits-all definition? Or do we differ by stakeholder group?
- Review analytics team structures in a self-service world
- Talk about the necessary tools and tactics to get it right
And, along the way, not forgetting to consider what can go wrong!
If you are considering self-service, have implemented a model successfully or failed to make it live up to expectations then this discussion is for you. Start improving your analytics ROI immediately following the conference.
These days just about anything in digital can be tracked. “Hooray!” shout your stakeholders, not truly understanding both sides of the coin – the blessing and the curse. Positioning this vast array of data points to make sense and be of genuine benefit to stakeholders is a tricky task for any analytics team. Enter KPI frameworks!
Led by Dan Grainger of Bourne Leisure, this discussion will address the following points:
- What does an effective KPI framework look like?
- Where to start? Who should be involved? Who owns it?
- Are your KPIs actionable? Should you follow SMART or other methodology?
- How do you evolve your KPI framework over time but keep consistency?
- How do we get our KPI framework adopted across the organisation?
This discussion will give you the impetus to identify, set and use targeted KPIs to achieve better outcomes for your business and customers.
While only a few companies dare to sell proprietary algorithms these days, dozens, if not hundreds, of vendors – mostly start-ups – offer analytics solutions as well as data and ML platforms. They promise to relieve the pain around self-service BI, data sharing, preprocessing, ETL, deployment, model training at scale, monitoring and plenty of other issues that plague modern data organisations.
Solving these problems in-house, typically building on open-source software and custom glue code, is possible but costly and often slow. Buying from a small vendor, on the other hand, can be risky – subscription prices are volatile, start-ups fail, and even seemingly big players, like Tableau or Looker, are up for grabs for big tech, which can put customer investments at risk. Buying into a third-party hosted open-source solution can be a middle ground, allowing you to outsource at least the operations.
In this discussion, led by David Christian Blumenthal-Barby, Head of Predictive Analytics & ML at language learning company Babbel, we will discuss lessons learned, wounds licked, victories achieved and best practices around "build vs. buy" in Analytics, ML and Data Science.
We will cover the following questions:
- How to conduct due diligence on a small vendor?
- When is the speed gained worth the risk?
- How to mitigate the risk of a vendor failing when setting up your data architecture?
- On the "build" side, how engineering heavy should a self-reliant data organisation be?
- How to avoid a big mess of every team or every data scientist hacking along as they please?
- How technical can we expect data scientists or analysts to be when they navigate a home-grown stack?
Gain invaluable insights and practical use cases to help you save time and money learning from the group’s collective experiences, good and bad.
State of the art ML is often considered a black box that does not teach you anything about your data. BI and Analytics tools on the other hand provide intuitive, often visual access to data, but they are pretty dumb. They leave it to the analysts to find what to look at, and they hardly go beyond simple regression or time series models if they provide algorithms at all.
Recent advances in Explainable AI, such as LIME on DNNs, or SHAP scores on Gradient Boosting models, have the potential to change this. They allow data scientists to peek inside the black box, and thus use state-of-the-art ML not only for prediction but for generating insights and hypotheses. However, these methods often require knowledge and technical skills that put them out of reach for business-focused analysts. Also, many of them are pitched as debugging tools for ML engineers, underselling their value for generating insights.
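To make the SHAP idea concrete, here is a minimal, purely illustrative sketch in plain Python (not the `shap` package; all numbers are invented): for a linear model the SHAP value of each feature has a closed form – the coefficient times the feature’s deviation from its dataset mean – and tree or deep-learning explainers generalise this notion of a signed per-feature contribution.

```python
# Minimal sketch, assuming a linear model f(x) = b + sum(w_i * x_i).
# For linear models the SHAP value has a closed form:
#   phi_i = w_i * (x_i - mean_i)
# Tree/DNN explainers generalise this to non-linear models.

def linear_shap(weights, x, feature_means):
    """Signed contribution of each feature to one prediction."""
    return [w * (xi - m) for w, xi, m in zip(weights, x, feature_means)]

weights = [2.0, -1.0, 0.5]        # model coefficients (illustrative)
feature_means = [1.0, 1.0, 1.0]   # feature means over the training data
x = [3.0, 0.0, 1.0]               # the instance we want to explain

phi = linear_shap(weights, x, feature_means)
print(phi)  # [4.0, 1.0, 0.0]

# Local accuracy property: contributions sum to f(x) - E[f(X)]
f = lambda v: sum(w * vi for w, vi in zip(weights, v))
assert abs(sum(phi) - (f(x) - f(feature_means))) < 1e-9
```

The same “which feature pushed this prediction up or down, and by how much” reading is what makes these scores usable as an insight-generation tool, not just a debugging aid.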
In the discussion, David Christian Blumenthal-Barby of Babbel, will lead a conversation on how advanced ML is used in Analytics; which methods have participants used or evaluated? Was it successful? What are the challenges interpreting the results – and communicating them to the business? How is the tooling?
On a more general level, we want to discuss how ML influences Analytics in the future. Does it still make sense to draw a line between BI and Data Science, between Analytics and ML? How does this affect job profiles and the search for talent? How can BI tools stay accessible while going beyond grouping data and drawing pictures?
Join us for a fascinating conversation about how ML is changing the life of the data analyst.
In data science we are dealing with uncertainty and hence probability every day. Too often in Analytics we report naive point estimates as the business is uninterested in hearing anything beyond a single simple number.
Dealing with probability, we can choose between the Frequentist and the Bayesian approaches. The former is well established, but certain elements of it have come under fire in the sciences. Critics also argue that concepts like confidence intervals and p-values are too remote from the business' way of thinking to be useful.
The Bayesian camp claims that their approach, based on priors, posteriors and credible intervals, is both rigorous and much more intuitive for stakeholders. However, Bayesian analysis quickly runs into technical and computational issues that require highly skilled, hard-to-find specialists to solve.
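Not every Bayesian analysis needs a sampler, though. As an illustrative sketch (the data is invented), a conjugate Beta-Binomial model gives the posterior for a conversion rate in closed form:

```python
# Hedged sketch: Bayesian estimate of a conversion rate using a conjugate
# Beta prior - the posterior is available in closed form, no MCMC needed.

def beta_posterior(prior_a, prior_b, conversions, trials):
    """Update a Beta(prior_a, prior_b) prior with binomial data."""
    return prior_a + conversions, prior_b + (trials - conversions)

# Uniform Beta(1, 1) prior; 30 conversions out of 1,000 visits (invented)
a, b = beta_posterior(1, 1, conversions=30, trials=1000)
posterior_mean = a / (a + b)  # a single number stakeholders can read directly
print(round(posterior_mean, 4))  # 0.0309
```

The computational trouble the paragraph above refers to starts once models outgrow such conjugate special cases and require sampling or variational methods.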
In the discussion, we want to share how delegates deal with statistics and uncertainty in their organisations. We will discuss:
- Technology and methodology – Frequentist, Bayesian, or a mix?
- Standardised protocols, or go as you please for the data scientists?
- Automation – where does it make sense?
Equally important we will address problems and strategies for communicating results to the business, as well as how methodology and communication issues are related – do we teach statistics to our stakeholders, or do we simplify our findings, hiding uncertainty and complexity? Does a Bayesian approach help?
Time permitting, we will also touch on challenges that arise with self-service BI, where everyone in the company draws their own conclusions from pretty diagrams, often unaware of any statistical mess this may bring about.
Organisations successful with Conversion Rate Optimisation (CRO) apply a structured, continuous and iterative methodology to their testing and targeting programme. Is that true in your case?
Sigi Bessesen, Head of Optimisation at HSBC Bank, shares his experience having helped multiple companies through the various stages of optimisation programmes. In this discussion we will explore themes around the following questions:
- What are your CRO aspirations?
- Are you following any kind of CRO framework?
- What does a great CRO framework look like?
- Is there a single best practice scenario for CRO?
Whether you are about to kick off your optimisation programme, you have been doing it for a while and now hitting some stumbling blocks or you are looking to take your optimisation to the next level – this would be the discussion for you.
As in analytics, Machine Learning (ML) is quickly becoming an established part of optimisation technology. But this welcome addition also poses challenges, particularly in terms of the required changes to our optimisation framework and team roles.
HSBC has a well-established optimisation programme. Yet like many other organisations, the bank is in a continuous process of maximising the value of ML in optimisation. In this discussion, led by HSBC’s Head of Optimisation, Sigi Bessesen, we will:
- Discuss the differences between conventional split testing and an ‘always-on’ approach
- Ask how the optimisation team should organise itself for success (processes and personnel)
- Explore how to measure activity and attribution effectively
You will walk away from this discussion with a clear view of how your peers are using ML in optimisation and with inspiration and awareness of the dos and don’ts.
Throughout the previous decade organisations became increasingly aware of the benefits of split testing. Most companies dipped their toes into optimisation with an outsourced agency model whilst looking to evaluate the potential ROI of optimisation. As optimisation practices mature, companies are increasingly shifting towards an in-house optimisation model. But is that the right approach for every business?
Whether you opt for in-house or agency, they both come with pros and cons. In this discussion we will debate the pros and cons and look to answer the challenges around structure, people, stakeholders and business in optimisation:
- Where is optimisation placed in your organisation?
- When does one organisational model trump the other?
- Is there room for a hybrid solution?
- How do you ensure quality of output under each model?
The main purpose of this discussion is to give you a clear view of what you need to do to ensure optimisation delivers for your organisation - whatever model you choose.
Global privacy regulations are evolving, with a myriad of different rules in different jurisdictions. Further evolutions of browser privacy settings – first with the rise of ITP and soon with the demise of third-party cookies – are fundamentally changing the digital marketing and analytics landscape. Management is looking to us to provide clarity and, more importantly, solutions.
Join Tom Betts, Chief Data Officer at the Financial Times to discuss:
- The current and upcoming changes in regulation
- Are regulators or big tech making the largest impact?
- What changes are required both in terms of tech and processes?
- How can you get ahead by preparing for these changes?
Come away from this discussion with a clear understanding of the regulatory threats and opportunities for your analytics practice and with practical options for the most pressing challenges.
In the digital analytics that supports many industry sectors we have long been obsessed with measuring 'success' - click-through rates, or better, conversion rate improvements. The digital analytics tool stack gave us the opportunity to test and optimise sales funnels, giving rise to the field of Conversion Rate Optimisation.
But what if you want to go beyond measuring one-time conversions and take a longer-term view? How do you evolve your tool stack and analytical approach to optimise for Lifetime Value? How do you optimise for engagement? Retention? Repeat purchase? Annual revenues? Upsell?
Join Tom Betts, Chief Data Officer at the Financial Times to discuss how to grow the longer-term value of customer relationships and optimise for LTV.
Today’s consumers are demanding increasingly personalised, relevant communications from the brands they use. The rapid growth in customer data and technological progress make it possible for marketers to achieve true personalisation at scale for the first time, by harnessing the power of machine learning to match the right message to the right customer, in the right context, at the right time.
OTTO has invested heavily in a test & learn approach to personalisation, trying different approaches and methods to customise the user journey, offsite and onsite, to the specific needs of consumers.
In this huddle, Steffen Methling, Senior Marketing Analyst at OTTO, will discuss progress, learnings and challenges in using user data to personalise content decisions. We will investigate the types of algorithms and data that are used to come up with the best message for each customer. We will talk about tactics, challenges, revelations and dead-end streets in automated data-driven marketing decision making.
Make this your personal journey to a leap ahead in user experience.
The customer journey has been a catchphrase in digital marketing and ecommerce for many years. Optimising the path through different customer touch points is the end goal of the effort to understand these journeys. The toolset for analysing customer journeys has always been wedged between overly simplistic views on journeys with one-way funnels and overly complicated methods such as multi-layered Sankey flows where it is extremely difficult for humans to extract actionable insight.
In this session, led by Steffen Methling of OTTO, we will explore what has worked in the past and whether there are any new methods fuelled by advances in machine learning to extract more meaningful information from user behaviour across touch points not easily fitting into classic AIDA funnels.
Come share your customer journey analytics experience and together we will emerge with tangible outcomes you could apply at your organisation following the conference.
GDPR is shaping our use of personal data in marketing and digital commerce but has had relatively little impact on the availability of data for personalisation and targeted marketing. ePrivacy legislation could change that radically, making the use of user data to personalise the customer journey reliant on consent.
This huddle led by Steffen Methling, Senior Marketing Analyst at OTTO, will focus on marketing strategies that can help us get user consent:
- Do we need incentives to collect user opt-ins?
- What role can app marketing play?
- What messages could persuade users to allow the use of data to personalise their experience?
- Do we need to coordinate our consent offer in a consolidated marketing programme or do we simply integrate it in daily marketing business?
- How do we set up and coordinate such an effort?
Join us in this factory of ideas to better cope with ePrivacy requirements and achieve greater user consent to our targeting initiatives.
A data strategy is a must for any company these days. It helps ensure that data is managed and used like an asset. But data sources are constantly changing, creating a challenge for maintaining high standards of data quality, transparency and accessibility. At the same time, that change is potentially the number one advantage for business growth in a competitive environment. So what can the applied dimensions of a data strategy look like, and how do we get the most out of them?
In this discussion, Tim-Fabien Pohlmann, Director of Marketing BI at HelloFresh will explore how to devise a great data strategy, implement it and ensure the organisation adheres to it. The aim is to have relevant data, which is fully governed, available, accurate and accessible in a timely manner when we need to query it.
Questions we will answer include:
- Why do organisations struggle to maintain useful and accurate data?
- What are the successful use cases and what are the key elements supporting them?
- How do we ensure measurement is considered up front in every new initiative and where should accountability sit?
- How to maintain consistency across products and business units?
- How do we ensure data governance is prioritised, when competing with other initiatives which might appear to have more immediate customer benefit?
- How to implement monitoring, alerting and anomaly detection to prevent data loss/corruption?
If you strive to build trust in an elaborate data environment – this session is for you.
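As one illustration of the monitoring and anomaly detection point, a minimal check (a hypothetical sketch with invented numbers, not HelloFresh’s actual tooling) can be as simple as a z-score test of today’s load volume against its recent history:

```python
# Minimal sketch (assumed approach): flag a daily metric as anomalous when it
# deviates more than k standard deviations from its recent history.
from statistics import mean, stdev

def is_anomalous(history, today, k=3.0):
    """z-score check of today's value against a trailing window."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) > k * sigma

rows_loaded = [10230, 10180, 10310, 10090, 10270, 10150, 10240]  # invented
print(is_anomalous(rows_loaded, today=4200))   # True  - likely data loss
print(is_anomalous(rows_loaded, today=10200))  # False - normal variation
```

Even a crude alert like this catches silent pipeline breakages before they erode trust in the data.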
How do you build a truly effective analytics programme in a fast-paced, volatile, uncertain and complex environment? How do you establish a strong team with a high degree of diversity across domains and cultural backgrounds? At the same time, the team must develop and maintain a high level of credibility with other parts of the organisation in order to be (a) heard and (b) given the necessary funds and support to implement and run the programme.

These are challenges faced daily by Tim-Fabien Pohlmann, Director of Marketing BI at HelloFresh. In this discussion, he will encourage delegates to share their views on and experiences in designing an organisational structure that supports their business mission. We will discuss how to structure the programme and develop it over time; ask what skills and roles are essential from day one and which could be added as the programme evolves; and consider how the analytics team should be integrated into the larger organisation.
We will discuss how analytics managers can excel in this uncertain, complex and ever changing environment:
- Cultural diversity in growing teams
- How to pivot your team from small and multi-functional to large and specialist-focused?
- What process adjustments are required as your team expands?
- How best to nurture talent in an increasingly divergent team?
We invite both analysts and managers to join the conversation which will produce new insights practical for all.
Attribution modelling and customer journey analysis are two of the most important topics for online marketers these days. A well-designed attribution model can lead to huge savings in marketing spend and/or a significant increase in marketing effectiveness.
However, the multi-touch omni-channel reality of our days means attribution modelling is one of the most challenging and consequently controversial domains in analytics. Some even argue it is impossible. Yet we do not have the luxury to simply give up.
To challenge accepted knowledge in the attribution domain, Tim-Fabien Pohlmann, Director of Marketing BI at HelloFresh, will discuss how to build the least-worst multi-channel attribution models. We will cover:
- How to integrate offline and online marketing attribution?
- What would the perfect multi-channel attribution model look like?
- Testing and measuring the efficiency of different attribution models
- Existing methodologies used for multi-channel attribution
- Best practices to track cross-device users
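As a concrete (purely illustrative) starting point for comparing methodologies, here is a sketch of one common heuristic baseline – position-based “U-shaped” attribution – that data-driven models are typically benchmarked against; channel names and weights are invented:

```python
# Illustrative sketch: position-based ("U-shaped") attribution gives 40% of
# the conversion credit to the first touch, 40% to the last, and spreads the
# rest evenly across middle touchpoints. A heuristic, not a recommendation.

def u_shaped_attribution(touchpoints, first=0.4, last=0.4):
    """Return each channel's share of credit for one converting journey."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        half = (1.0 - first - last) / 2
        return {touchpoints[0]: first + half, touchpoints[1]: last + half}
    credit = {tp: 0.0 for tp in touchpoints}
    middle_share = (1.0 - first - last) / (n - 2)
    for i, tp in enumerate(touchpoints):
        if i == 0:
            credit[tp] += first
        elif i == n - 1:
            credit[tp] += last
        else:
            credit[tp] += middle_share
    return credit

journey = ["paid_search", "email", "display", "direct"]
print(u_shaped_attribution(journey))
# paid_search and direct each get 0.4; email and display each get ~0.1
```

Testing such heuristics against holdout experiments is one way to measure the efficiency of different attribution models, as the bullet points above suggest.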
Expect different views and approaches to one of the most fundamental analytics challenges for marketing analysts.
Forrester research has studied insights-driven businesses that are gaining market differentiation with synced-up data & analytics approaches which deliver coordinated decisions, actions and experiences at scale. By 2021, these businesses will together generate $1.8 billion a year in revenue.
In this session, led by James McCormick, Principal Analyst at Forrester, we will explore the digital analytics and intelligence frameworks and practices learnt from these insights-driven businesses. We will discuss the confused landscape of digital data, analytics and experience optimization technologies including:
- The landscape of digital intelligence capabilities and how the tech fits together
- Major technology subcategories and their leading vendors: e.g. web analytics, mobile analytics, CDPs etc.
- Strategies for building advanced digital intelligence practices
Join James for an exciting discussion about maximising the value of your organizational digital analytics strategy.
The term “Customer Data Platform” (CDP) has recently launched itself into the digital data technology zeitgeist. Most pundits are clear that there is additional value in this new approach to managing and processing customer data. But there is mass confusion as to what a CDP is and indeed what CDPs can do. Many different types of vendors are chasing the term, with many different use cases. In this session we will explore:
- What are CDPs?
- How do CDPs advance digital customer analytics?
- Which vendors claim to offer the tech, and how mature are their offerings really?
Led by James McCormick of Forrester, we will draw upon the group’s collective experience to map out a clear view of the CDP landscape and successful use cases.