
Work experience with website alpha

In this blog post I’ll talk to you all about what I’ve learnt during my second week at Parliament. First things first: I’m Matthew Bwye Lera, I’m 18 years old and I’m from Spain; I’ve come to England for my three-week work experience here.

Website Alpha team

During my second week I’ve mostly been working with the Website Alpha team, and mainly with the two developers on the team: Giuseppe and Rebecca. I’ve seen a lot of new things regarding programming, and I’ve also been able to test my code-reading skills by following what they were doing.

I mostly got to see SPARQL queries and the Ruby and Python languages. Ruby and Python are two programming languages, and both had to be used alongside JSON (which I spoke about in the previous blog post) to make the website. SPARQL supported these other languages; Giuseppe and Rebecca used it to pull out data about certain things such as people, their ID numbers, etc. Each piece of data is expressed as a subject, a predicate and an object, which together compose the basic structure of linked data. So basically, Giuseppe and Rebecca described their data as subject-predicate-object statements and then queried them from their Ruby and Python code through SPARQL.
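To make the subject-predicate-object idea concrete, here’s a minimal sketch of running a SPARQL query from Python with the SPARQLWrapper library; the endpoint URL and the property names are hypothetical, not the team’s real data model.

```python
# A minimal sketch of querying a SPARQL endpoint from Python.
# The endpoint and vocabulary below are made up for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    SELECT ?person ?name
    WHERE {
        ?person a <http://example.org/Person> ;   # subject, predicate, object
                <http://example.org/name> ?name .
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["person"]["value"], row["name"]["value"])
```

Each pattern in the WHERE clause is itself a subject, a predicate and an object, which is why SPARQL fits linked data so naturally.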

This is part of the domain modelling the team managed to produce from the linked data:

[Image: domain model diagram]

As stated above, they used linked data, but apparently most websites handle data in a different way: with a relational database. This works on basic relations between variables, such as a person and their Twitter account. Even though it isn’t used here, I still think it’s important to learn about relational databases.
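For contrast, here’s a tiny sketch of the relational approach using Python’s built-in sqlite3 module; the tables and names are invented for illustration and have nothing to do with the team’s actual work.

```python
# A minimal sketch of a relational database: rows in tables, joined
# by keys, echoing the person/Twitter-account example above.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE twitter_account "
    "(person_id INTEGER REFERENCES person(id), handle TEXT)"
)

conn.execute("INSERT INTO person VALUES (1, 'Ada Lovelace')")
conn.execute("INSERT INTO twitter_account VALUES (1, '@ada')")

# The relation: join a person to their Twitter account via person_id.
for name, handle in conn.execute(
    "SELECT p.name, t.handle FROM person p "
    "JOIN twitter_account t ON t.person_id = p.id"
):
    print(name, handle)
```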

Samu, Steve, Vinitha and many other people would also work alongside us, and I would learn from them. What I mainly got to see is how different teams work together and link their work; even though the teams work separately, they then bring their conclusions together into one piece of work at the end of the day.

From what I could see, this team was especially “agile”. Every day at 10, the team would do what is known as a “stand up”; this is one of the “ceremonies”. A stand up is nothing more than a 15-minute talk about what was done the previous day and what the plans are for the day. I got to attend some of their agile meetings or ceremonies. A sprint is a time-boxed period of work that focuses on getting planned tasks done. The website team works in two-week sprints; at the beginning there is a planning ceremony where the team discusses what needs to be done and in what order. During the daily stand ups these objectives may be moved around or ticked off as completed.

I was lucky enough to be at a sprint planning and to experience their retrospective. In a retrospective, the team members discuss what has helped the team improve, what has stopped it from making good progress, and any questions or suggestions that could help improve the team. This retrospective was weighted towards the positive points, such as having experts on Parliament working with the team (e.g. Katya, who works with the team, actually has a job at the Journal Office in Parliament), which helps the team choose what should be on the website and what shouldn’t; the introduction of new tools such as Python; good pacing; etc. Still, there were several problems to action and take forward into the next sprint. After the retrospective, the planning commenced and the team had to decide what the objectives of the next sprint were, how to start the sprint and what they were going to let everyone else know in their next show and tell.

The team had done a show and tell a few hours before the sprint planning, so the feedback they got there helped them organize the next one. I attended two website show and tells: the usual slot every two weeks, and the one for the whole of the Digital Service.

Continuous delivery pipeline team

I also spent time with the Continuous delivery pipeline team. I already knew most of the team, and it was great working with them. In a nutshell, what this team had to do was deliver what the developers had created. They taught me a great number of things, so let’s go step by step.

I learnt what a container was. Basically, a container is a fragment of a virtual server, and a virtual server is part of a server. These are all spaces to keep files or apps, and each takes up a certain amount of capacity. The difference is that, unlike servers and virtual servers, containers are more flexible, which means you can change a container’s capacity to whatever you want so it has exactly the amount your app needs. Also, two containers that belong to one server can have different operating systems, which is quite practical.

There are a few more things I learnt about containers. An image is a container that isn’t running or being used at the time (a container is started from an image). A volume is the set of files the container has loaded from a host; this means containers don’t actually keep files themselves, but rather load them from somewhere else, which makes them even more capacity-efficient. And a label is basically the container’s metadata: its information, which can also be used as a reference by other containers.
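As a rough illustration (assuming the Docker SDK for Python, which isn’t necessarily what the team uses), here’s a sketch showing an image, a label and a volume in practice; the image name, label and host path are made up.

```python
# A minimal sketch of containers, images, labels and volumes using
# the Docker SDK for Python. All names here are hypothetical.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Run a container from an image, attaching a label (metadata) and
# mounting a volume of files from the host (path is hypothetical).
container = client.containers.run(
    "alpine:latest",                   # the image the container starts from
    "echo hello from a container",
    labels={"team": "website-alpha"},  # metadata attached to the container
    volumes={"/tmp/data": {"bind": "/data", "mode": "ro"}},
    detach=True,
)
container.wait()                       # let the short-lived command finish
print(container.logs().decode())

# Images are the stored, not-running templates; list the local ones.
for image in client.images.list():
    print(image.tags)
```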

Steve, James and Vinitha explained how they delivered what was done by the developers. The developers write code and host it on a website called GitHub. While the code is hosted it can still be edited, and when any changes come in, the GoCD pipeline checks that the code still works and is deployable. Once the code is ready to be deployed, the container image is pushed to Docker Hub, where images (containers that aren’t running) are stored; when a container needs to run, it goes to the swarm manager, which schedules it onto one or multiple swarm nodes. In the end, it’ll be delivered to the parliament.uk website. The whole process is summarized in this image:

[Image: continuous delivery pipeline (CDP) diagram]
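As a sketch of the “deliver” step only, here’s what building and pushing an image to a registry like Docker Hub can look like with the Docker SDK for Python; the repository name and build path are invented, and the team’s real pipeline (GoCD) automates steps like these rather than running them by hand.

```python
# A hypothetical sketch of building an image and pushing it to a
# registry, from which swarm nodes could later pull and run it.
import docker

client = docker.from_env()

# Build an image from a local Dockerfile (the path is made up).
image, _build_logs = client.images.build(
    path="./website-alpha",
    tag="example/website-alpha:latest",
)

# Push the image to the registry; a swarm manager can then schedule
# containers from this image onto one or more swarm nodes.
for line in client.images.push(
    "example/website-alpha", tag="latest", stream=True, decode=True
):
    print(line)
```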

The last thing I learnt from them was about the logs. From what I could see, logs are gathered by Logstash, then pushed into Elasticsearch, and finally stored in its databases. There’s a tool called Kibana that is able to read the logs, letting the user know where they come from, where they’re going, how long it takes them to be sent to the database, etc. This tool can be used to spot errors during the process. This whole thing is called ELK (Elasticsearch/Logstash/Kibana).
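To show what reading those logs back might look like, here’s a minimal sketch querying Logstash-style indices in Elasticsearch with the official Python client; the host and index pattern are assumptions, and Kibana effectively runs this kind of query for you behind a UI.

```python
# A minimal sketch of searching Logstash-style log entries stored in
# Elasticsearch. Host and index pattern are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed Elasticsearch host

# Search recent log entries across the daily Logstash indices,
# e.g. to spot errors during the process.
response = es.search(
    index="logstash-*",
    query={"match": {"message": "error"}},
    size=5,
)

for hit in response["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))
```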

Other events

Independently of who I was working with at the time, I also got to attend other events, such as a show and tell by the Data and Search team and a CI (continuous improvement) event.

The show and tell could be divided into two parts: Matt’s talk and Dan’s talk. I worked with both of them last week, which meant I had already seen most of what they showed. Still, I liked reviewing what I’d learnt and seeing it from a different point of view. Matt explained that his team is relatively new, and that their task is to manage data. To be more precise, they manage people data, financial data, organizational data, space data, location and telephone data, and asset data. Out of these, the financial and asset data are mainly just translated and transferred, unlike the rest. Space and location data aren’t to be mixed up: space data refers to the physical space, like the distance between the walls of a room, while location data refers to the place that area takes in the maps of the building, for example the number of a room. After talking a bit about this, he explained that they rely on a tool called BizTalk to manage data. He also mentioned that data doesn’t always arrive successfully, and solving that issue is part of their job.

Dan’s talk was mostly about what had been discussed during the previous week’s meeting. He spoke about the objectives his team wanted to achieve, the achievements they had already reached and the way the work was organized (data modelling is Michael’s duty, Samu is in charge of the data platform and Robert has to make sure that data can be found).

At the CI event, four presentations were shown. There was one about the libraries of the Houses of Lords and Commons, another about Rapid Start, which was presented by Julie and Ed from the Digital Service, and then one about Step Up, which was implemented at London City Airport. I was expecting the event to be IT-related, so I wasn’t really able to follow these presentations as easily (except for the Rapid Start presentation). The event’s goal was to see how the different continuous improvement workshops can help make improvements to processes in teams and projects. The Rapid Start presentation explained the agile way of working, which, alongside the London City Airport presentation, really inspired the people there.

Summary

Overall, this week was really good! Since I focused more on one team, I got to see in greater depth what they had to do and how they did it. I already knew most of the people I worked with, which also made it easier to understand what they did and to ask questions.

Once again, I would like to thank some people before ending this blog post. In addition to those I thanked last week, such as Colin (my uncle) and Julie for having helped me get this work experience, this time I must also thank Giuseppe and Rebecca for having spent so much time with me and taught me so much about programming. In fact, Giuseppe showed me a couple of sites that could help get me into learning how to code; I appreciate that an awful lot. Of course, I must thank the rest of the Website Alpha team: even though I spent most of my time with Giuseppe and Rebecca, they’ve also taught me a lot. Thank you also to the Continuous delivery pipeline team, who spent a lot of time and effort helping me in my work experience.
