Our team is building a data service for Parliament. Having worked in research for five years, I found a job in the public sector very appealing, especially as it gives me the opportunity to make a difference to people's lives.
I came across the PDS data analyst job and the skills required seemed to fit me quite well, so I applied and got the job. I joined the data and search team last July, having come from an academic background.
Data, data, data
My role as a data analyst is to support the development of the data services by analysing a wide variety of complex data from various sources, and making suggestions and recommendations for improvements.
Another part of my role is to help other teams in Parliament with their data. This involves modelling and statistics, creating reports, data mining, and data management. We're currently working with the Indexing and Data Management Section (IDMS) of the Commons Library, which is responsible for cataloguing parliamentary material.
We're helping them to better understand their indexing timescales in relation to the type and volume of material they receive. We've worked with the developers in our team to set up a process for extracting information directly from Apache Solr.
Apache Solr is the search platform used to query all indexed parliamentary material, and it sits behind the search material service. We can now connect to this data to get a better understanding of it before deciding how to summarise it. This provides a data-driven alternative to information that IDMS currently tracks manually.
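As a rough sketch of what querying Solr for indexing information might look like: the query parameters below (`q`, `fl`, `rows`, `wt`) are standard Solr select syntax, but the endpoint URL, core name, and field names are purely illustrative, not the real internal service.

```python
import json
from urllib.parse import urlencode

# Hypothetical Solr endpoint -- the real service is internal to Parliament.
SOLR_BASE = "http://solr.example.internal:8983/solr/parliament/select"

def build_solr_query(query, fields, rows=100):
    """Build a Solr select URL using standard query parameters."""
    params = {
        "q": query,               # Solr query string
        "fl": ",".join(fields),   # restrict the fields returned
        "rows": rows,             # page size
        "wt": "json",             # request a JSON response
    }
    return f"{SOLR_BASE}?{urlencode(params)}"

def extract_docs(response_text):
    """Pull the document list out of a standard Solr JSON response."""
    payload = json.loads(response_text)
    return payload["response"]["docs"]

# A canned response in the shape Solr returns, with made-up records:
sample = json.dumps({
    "response": {
        "numFound": 2,
        "docs": [
            {"id": "A1", "type": "EDM", "indexed_date": "2018-02-01"},
            {"id": "A2", "type": "PQ", "indexed_date": "2018-02-02"},
        ],
    }
})
url = build_solr_query("type:EDM", ["id", "type", "indexed_date"])
docs = extract_docs(sample)
```

Once the documents are extracted like this, type and date fields can be aggregated to compare indexing timescales against the volume of material received.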
What it’s like
I'm encouraged to explore different analytical methods and software tools in my day-to-day work. During my eight months at PDS, I've worked with a wide range of complex data and I'm picking up new skills all the time. So far, I've learnt how to use Google BigQuery, Google Analytics, Application Insights, and tools like R and Power BI.
One of the most interesting things I've worked on is assessing the performance of the APIs that drive the beta website. Monitoring our APIs is important because we want to provide a good user experience, detect performance issues, and also monitor usage.
For example, looking through the telemetry data from our photo API led to changes to the tracking code of the beta website, allowing us to distinguish internal API calls (made by the beta website itself) from external ones.
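A minimal sketch of the kind of classification this enables: the telemetry rows, field names, and the internal-origin marker below are all hypothetical stand-ins, since the real data lives in Application Insights.

```python
# Hypothetical telemetry rows -- real records come from Application Insights;
# the field names and values here are illustrative only.
calls = [
    {"url": "/photo/123", "origin": "https://beta.parliament.uk"},
    {"url": "/photo/456", "origin": "https://example.org"},
    {"url": "/photo/789", "origin": ""},
]

# Assumed marker set by the beta website's tracking code.
INTERNAL_ORIGIN = "https://beta.parliament.uk"

def classify(call):
    """Label an API call as internal (made by the beta website) or external."""
    return "internal" if call["origin"] == INTERNAL_ORIGIN else "external"

labels = [classify(c) for c in calls]
```

Separating the two lets us report API usage by actual external consumers rather than counting the website's own traffic.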
I also began exploring the use of clustering techniques to define groups of users based on their journeys on the beta website. This was a good way to learn how to use BigQuery and become more familiar with SQL and R.
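To illustrate the idea behind this kind of clustering, here is a toy k-means sketch in pure Python. The session features (pages viewed, minutes on site) are made up; real journey data would come from BigQuery, and in practice a library implementation would be used rather than hand-rolled code.

```python
import math
import random

# Illustrative session features: (pages viewed, minutes on site) per visit.
# These values are invented for the sketch.
sessions = [
    (2, 1.0), (3, 1.5), (2, 0.8),      # quick look-ups
    (12, 9.0), (15, 11.0), (11, 8.5),  # long research visits
]

def kmeans(points, k, iters=20, seed=0):
    """Basic k-means: assign each point to its nearest centroid,
    then recompute centroids, for a fixed number of iterations."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
            if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(sessions, k=2)
```

With two well-separated groups like these, the quick look-ups and the long research visits fall into different clusters, which is the kind of user-journey segmentation the exploration aimed at.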
What I like about it
I get to learn concepts and technology that are completely new to me, such as graph databases, semantic data, and information architecture. The great thing about being part of the data and search team is that I get to be involved in the development of the new data service.
Read more about the work of the data and search team.