Matthew Dawkins https://www.matthewdawkins.co.uk Matthew Dawkins. Web developer, musician, Christian, father, geek. Not necessarily in that order. Wed, 26 Sep 2018 13:04:11 +0000 en-US hourly 1 https://www.matthewdawkins.co.uk/wp-content/uploads/2017/01/md-favicon-150x150.png Matthew Dawkins https://www.matthewdawkins.co.uk 32 32 Google Analytics, EU Cookie Law and GDPR https://www.matthewdawkins.co.uk/2018/09/google-analytics-eu-cookie-law-and-gdpr/ https://www.matthewdawkins.co.uk/2018/09/google-analytics-eu-cookie-law-and-gdpr/#respond Wed, 26 Sep 2018 13:02:36 +0000 https://www.matthewdawkins.co.uk/?p=2318 Key question: do I need to ask permission from website visitors before using Google Analytics? This is a web developer’s guide, describing what GA is, how it relates to the law, and suggests some potential implementations. How Google Analytics works GA can be installed on a website in one of […]

The post Google Analytics, EU Cookie Law and GDPR appeared first on Matthew Dawkins.

]]>
Key question: do I need to ask permission from website visitors before using Google Analytics? This is a web developer’s guide, describing what GA is, how it relates to the law, and suggests some potential implementations.

How Google Analytics works

GA can be installed on a website in one of two ways – either by directly including analytics.js on the site, or by using Google Tag Manager.  In either case, GA is activated, and the following happens:

  • GA’s Javascript code is injected into the site, loading analytics.js from Google’s server, referencing the GA property we have set up
  • A series of cookies are stored on the browser
  • A tracking pixel is requested for every page view or event

The cookies provide persistence – once set, they remain the same throughout the visit, so that GA can ‘track’ your behaviour across multiple pages.

The tracking pixels are never actually visible on the website, but are loaded via an HTTP request as if they were an image hosted by Google.  The cookies in your browser are sent with each request, which is how Google gets all that information back.

The Javascript code is what ties it all together – managing the cookies, detecting visitors’ activity, and packaging up all the information to send back to Google’s servers.

EU Cookie Law

This came into effect back in May 2011, and was adopted by all EU countries.  Websites were given a full year to make themselves compliant.  In the UK, the Information Commissioners’ Office (ICO) can enforce action, and in exceptional cases website owners can be fined a considerable amount.

The purpose of the EU Cookie Law is primarily to protect our privacy.  It came about in response to a swathe of tracking technologies designed to track visitors activity across different websites, potentially stealing personal information along the way.  The law doesn’t just extend to cookies, but to any mechanism of storing information on a browser.

GDPR

Another EU-wide regulation now in effect is the General Data Protection Regulation, enforced as of May 2018, and which is a thorough revision of 1993’s Data Protection Act.  At its core, its aim is to protect Personally Identifiable Information (PII), giving people more control over their privacy and their data.

PHP sessions

When someone visits a website, PHP can assign that person a session ID, to keep track of any information that needs to persist throughout their visit.  Typically this is what powers any login functionality, where we need to maintain the user’s logged-in status across multiple pages.  To achieve this persistence, PHP stores a cookie in the browser.  It contains no PII, and is harmless, but it is still a cookie.

EU Cookie Law is unclear at this point, but the general consensus is that session cookies do not pose a risk to privacy.  We therefore do not need to ask permission before storing a session cookie, which can be considered “essential” to the normal/expected operation of the website.

Other types of cookie

While GA and session cookies might be considered relatively ‘safe’, others are not.  Take special care if you are using Google Tag Manager, AdSense, HotJar, and other products that store cookies.  You may need to change your approach if you are bundling these in with GA, or you might have to ask for permission for these separately.  At the very least, it is unlikely you can silently imply consent for these.

GA and GDPR

The good news here is that GA does not handle PII at all.  All its cookies and all the information sent back to Google’s servers is intentionally anonymous, and is reported in aggregate, so that users’ privacy is already protected.  There is therefore no special consideration relating to GDPR when using GA.

GA and EU Cookie Law

The guidance given by the EU Cookie Law is that website owners must ask permission before storing cookies.  In short, this means we cannot use GA until consent has been given.

However, guidance and implementation varies widely across the internet, and in reality the EU Cookie Law is not strictly or uniformly enforced.  Many would argue that since GA only uses first-party cookies any data stored in them (which is already anonymous) cannot be accessed by other websites, and therefore already respects visitors’ privacy; this viewpoint advocates the silent use of GA without asking permission first.  Weight is given to this approach because the ICO simply does not have the power or resources to enforce the EU Cookie Law across every single website, and only targets the biggest and worst offenders – smaller websites/companies can essentially ‘get away with it’ by remaining unnoticed.  This does not make them compliant, but some are happy to take the risk.

GA isn’t always private

When setting up a new GA account, you are asked to specify your Data Sharing Settings.  By default, these are all ticked.  If you accept these default settings, data collected from your websites will be shared with others for various purposes.  If we are concerned with our visitors’ privacy, which we should be, these options should be disabled.

Also bear in mind that it is possible for a developer to deliberately send PII to GA.  This may be done with the best of intentions, but it is very clearly in breach of GDPR if explicit consent has not been given.  GA events should not include user IDs, email addresses, names, or any other PII.  When creating sites, we should also ensure that URIs do not include any PII either, as that would show up in the page view data.

How people manage their privacy

Education around internet privacy is improving, but it is far from mature.  Some will happily accept cookies from anywhere and everywhere.  Some will vary their response depending on the website or the reason stated for needing to store cookies.  Some will avoid cookies wherever possible.  It is possible to completely disable cookies in your browser, but given how many sites rely on cookies, especially for managing login sessions, those users will quickly find most of the internet doesn’t work any more.  A worrying trend is that cookie consent notifications are becoming annoying and intrusive, to the point where people accept and discard the notifications just so they can get to the content of the site, without really thinking about what they are agreeing to.

Consent lifetime

How long we retain a visitor’s consent (or otherwise) may vary depending on the context.  At the very least, it should aim to be unintrusive to the visitor, such that once a declaration has been made it is ‘remembered’ so that they are not asked again.  For website owners, we might want to remember that for as long as possible, so that acceptance is eternal and data is guaranteed.  However, best practice means thinking about our visitors first, so it may be appropriate to limit consent to a ‘reasonable’ amount of time, after which the original permission is deemed to be out of date.  How long that time is will depend on the site and the audience; for some it may be a year, for others it may be as little as 30 days.

Negative indications can be treated differently again.  If a visitor declares that they don’t want to accept cookies, the visitor could argue that they should only be asked once; however, this introduces the problem of how we remember that we asked them without storing it in a cookie.  A simplistic solution might be to remember the indication for the duration of the session; when the browser is closed, the indication resets, and the next visit would prompt the visitor again.  A more aggressive solution would be to keep asking on every page view, but this could be considered intrusive.  Another approach might be to record the preference on the server, if a login system is used, which would allow more control over the lifetime, assuming the visitor is logged in.

My recommendations

Today, I primarily create websites for my company’s clients, and therefore it is other companies’ reputations that are at stake.  We have a duty of care to ensure that the websites we create do not put our clients at any risk, and we must therefore strive to be fully compliant with GDPR and EU Cookie Law, regardless of the size of the site.

At the same time, we have a vested interest in showing the performance of our websites, which in some cases is best measured by visitors’ activity.  If we cannot accurately track how people are using our websites, we cannot be sure we are providing good return on investment.  We therefore need to strike a balance between compliance and pragmatism.

We also have a duty of care for the visitors to our clients’ sites.  We need to respect their privacy, even if it makes business more difficult for us and our clients.  We should therefore avoid any tactics that might be considered underhand; our websites should not be misleading, nor should they bully visitors into actions they are not happy with.

Implementation 1: silent GA

If we are confident that GA has been set up to completely anonymous, that Data Sharing is disabled, and that no PII is being sent, we could take the stance that GA is essential to the operation of the website and include it silently, without asking for permission first.  This has the advantage of allowing us to capture every visit to the site, which is very good from the point of view of measuring the effectiveness of the site.

The danger here is lack of visibility.  If we are storing cookies and not giving visitors any choice in the matter, this would be seen as a breach of the EU Cookie Law.

Implementation 2: notification-only GA

A common approach is to include GA automatically, with the aforementioned benefits of capturing everyone, and simply inform visitors that it’s happening.  This is often achieved with a short note in the footer, or a notification bar that can be dismissed.  The messaging usually follows that ‘continued use of the site is an indication of consent’; if the visitor is not happy with our approach, they are free to leave the site.

In this case, the implementation is actually misleading.  At the point of visiting the site, we are already storing cookies and tracking them anonymously via GA, before they have had a chance to accept the terms.  Leaving the site immediately does not undo their initial visit.

This site uses cookies. Read our Privacy Policy for more details.

 

Implementation 3: consent via interaction

As an improvement on the above, we might consider ‘activating’ GA once there has been a positive interaction on the site.  This might include dismissing the notification bar, navigating to another page on the site, or even scrolling down the page.  If we are confident that visitors will be suitably informed of our intentions as soon as the page loads, any subsequent action can be considered an acceptance of those terms.  If the visitor does not accept the terms, they can safely leave the site, and we will not have tracked them.

The risk with this approach is that the positive interactions we are listening to are not necessarily related to giving consent.  Dismissing a cookie notification, for example, could be argued to imply that the visitor doesn’t want us to store cookies.  Similarly, a visitor may want to scroll through the page before deciding whether they want to accept cookies.  Implied consent is unintrusive, but risks assuming consent where there wasn’t any.

GDPR has trained us to think about “explicit consent”, such that implying consent via an unrelated action seems insufficient.  However, GDPR is primarily about the handling of PII, which is (hopefully) out of scope for GA.  This approach is a good compromise – it protects visitors’ privacy, ensures as much data as possible for us to analyse, and shows that we are serious about privacy.

Actually implementing this approach has several implications.  Rather than simply copying and pasting the GA code into the page source, we need to conditionally include it once we have detected a suitable interaction.  A recommended approach would be:

  1. Check for the presence of a GA cookie – if it’s already there, we can assume that consent has already been given, and we can inject the GA code immediately.
  2. Listen for suitable events – these can include navigating to another page on the site, scrolling down the page some distance, clicking a button, dismissing a notification, or entering information in a form. Time on site should not be considered a reliable source of consent.
  3. As soon as an event is detected, inject the GA code and stop listening for further consent events.
  4. Remove the notification, ideally making it clear that consent has been received.
This site uses cookies. By using our site, you are accepting the terms of our Privacy Policy. X

 

Implementation 4: explicit consent

Another approach is to only activate GA if the visitor explicitly agrees to it.  This limits the above approach to only an action relating to giving consent.  Typically this would mean relying on a cookie notification bar or window that asks for consent; clicking an ‘Accept’ button would close the notification and activate GA.  Ignoring the notification would mean the visitor can browse as much of the site as they want without being tracked.

While this sounds ideal, many have taken to making their consent notifications as obtrusive as possible, coercing visitors not to ignore the request and to give consent.  This approach, while not specifically in breach of GDPR or EU Cookie Law, could be considered bad form, and potentially gives visitors an unwanted negative experience.  At the same time, implementing this approach with a more subtle notification means that visitors can ignore it and use the site untracked, which is bad for measuring activity.  This is however generally a good approach if the client is happy to accept lower numbers in GA data.

In terms of implementation, it is similar to the above, except that we are only considering consent if an ‘Accept’ button is clicked on the notification.  Do not include any other way of dismissing the notification, as that would be misleading.  You will also need to continue showing the notification on every page until consent is given.

This site would like to use cookies. Read our Privacy Policy for more details. Accept

 

Implementation 5: explicit consent with rejection

An approach sometimes seen, which is an extension of the above implementation, is to allow visitors to either accept OR decline the use of cookies.  GA cannot be activated until consent is explicitly given, but we additionally allow visitors to permanently (or temporarily) decline consent so that they are not tracked and they are not pestered by the notification any more.  This is excellent from a user’s perspective, as it puts them in complete control.

However, this is clearly a poor option from the point of view of the website owner, who is likely to see very few visitors accepting the request; people are more likely to reject than accept, given the choice.  At this point, the implementation seems overkill given that GA is already anonymous; it may be more applicable if other, less harmless, cookies are also being stored.  It also has the counter-intuitive impact that we need to store the visitors negative response by storing a cookie for the preference.

This site would like to use cookies. Read our Privacy Policy for more details. Accept Decline

 

Implementation 6: explicit consent with options

For a more comprehensive solution, we may consider splitting our cookies into different categories, and letting the visitor choose which ones they want to accept.  In this case we can make it clear in the messaging that GA and session cookies are essential parts of the normal working of the site; we might in this case take the approach of implementation 4 and activate GA as soon as consent is given.  The advantage here is that we are communicating a sense of control to the visitor, reassuring them that GA is ‘safe’ and allowing them choose whether they want anything else.

This is the best solution if you are using cookies other than GA, such as HotJar or AdSense, and is especially good if visitors can easily come back and change their decision later on.  It gives the visitor a sense that we are taking their privacy seriously.  However, it is a much weightier solution, perhaps unsuited to smaller sites where it might seem out of proportion to the significance of the site.

At this scale, it may be more appropriate to use an off-the-shelf implementation rather than rolling your own.  This not only reduces workload for us, but potentially helps instil a sense of trust.

This site uses cookies. Read our Privacy Policy for more details. Indicate below which cookies you are happy for us to use:

[ x ] Essential

[    ] Optional
Accept

Conclusion

Google Analytics is cool. Cookies are useful. The EU Cookie Law is there to protect our privacy. The GDPR is there to protect our data. The hard part is getting all of these to work together. The implementations above are all suggestions, and any one of them might work for you. Or there might be another way I haven’t thought of, in which case let me know in a comment below. Either way, never take consent for granted, always think about your visitors, and cover your back.

The post Google Analytics, EU Cookie Law and GDPR appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/09/google-analytics-eu-cookie-law-and-gdpr/feed/ 0
Interactive SVG map https://www.matthewdawkins.co.uk/2018/07/interactive-svg-map/ https://www.matthewdawkins.co.uk/2018/07/interactive-svg-map/#respond Mon, 23 Jul 2018 10:51:41 +0000 https://www.matthewdawkins.co.uk/?p=2311 I recently needed to present a map on a website so that visitors could choose their region (i.e. a group of countries, which could be arbitrary) before going into the site. After a little playing around, I settled on a solution that uses an SVG map of the world, and used […]

The post Interactive SVG map appeared first on Matthew Dawkins.

]]>
I recently needed to present a map on a website so that visitors could choose their region (i.e. a group of countries, which could be arbitrary) before going into the site. After a little playing around, I settled on a solution that uses an SVG map of the world, and used CSS and Javascript to highlight the region and make it clickable.

The first challenge was how to create an interactive area that wasn’t a box. HTML is great, but it’s fundamentally boxy, even when you round the corners off. With some countries being really small, a box-based approach simply wasn’t going to work. That’s where SVG comes in – it’s a vector image in the browser. I managed to find an open-source SVG of the entire world which suited my purposes. I mean, it’s not a perfect map, but that’s the curse of the Mercator projection.

The important thing about this particular SVG is that it’s marked up with country codes. Since SVG is basically just XML, it can include more than just visual data, and in this case the author has included two-letter country codes in the image code. That’s awesome, because it means we can hook into it with other code! For example, here’s an extract showing the SVG markup for Greece, with the two-letter identifier

gr
 .

<g cc="gr">
  <path d="M506.71,217.6l-0.11,1.33l4.63,2.33l2.21,0.85l-1.16,1.22l-2.58,0.26l-0.37,1.17l0.89,2.01l2.89,1.54l1.26,0.11l0.16-3.45l1.89-2.28l-5.16-6.1l0.68-2.07l1.21-0.05l1.84,1.48l1.16-0.58l0.37-2.07l5.42,0.05l0.21-3.18l-2.26,1.59l-6.63-0.16l-4.31,2.23L506.71,217.6L506.71,217.6z"/>
  <path d="M516.76,230.59l1.63,0.05l0.68,1.01h2.37l1.58-0.58l0.53,0.64l-1.05,1.38l-4.63,0.16l-0.84-1.11l-0.89-0.53L516.76,230.59L516.76,230.59z"/>
</g>

However, note that you need the SVG source in your HTML document for this to work. If it’s not in the DOM, the browser will treat it like an embedded object or an iframe, and you won’t be able to do anything with it. For the project I was working on, I used PHP to include the file contents into my page. An alternative approach, which I used for this demonstration, is to use Javascript to load the image into the page. You can use either approach. Or something entirely different. Whatever floats your boat. Just make sure the SVG markup ends up in your DOM.

$('.map').load('https://raw.githubusercontent.com/benhodgson/markedup-svg-worldmap/master/map.svg');

Once it’s there we can start manipulating it. The first thing I did was play around with the styles to make it look a little nicer for my context. A little CSS targeting the path elements is all you need, but note that unlike regular HTML elements you’re going to want to set the

fill
  and
stroke
  rather than
background
  and
border
 . You can use mouse cursor styles too, and transitions, and undoubtedly a whole lot more besides.

.map {
  max-width: 100%;
  path {
    fill: #c0c0c0;
    stroke: none;
  }
}

Now comes the Javascript. We start off with a list of country codes we want to group together. I started with a few countries in Europe, but you could use anything you like, using the two-letter country codes. Essentially, when we detect that the mouse has entered an SVG path (or group of paths) we check to see if that path (or group) has a country identifier in our region array. If it is, we apply an

.active
  class to it, and use CSS to change its colour.

var region = ['gb', 'fr', 'de', 'es', 'it', 'nl', 'ie', 'be', 'pt', 'ch'];

$('.map').on('mouseenter', 'path, g', function() {
  var thisCountry = $(this).attr('cc');
  if (region.indexOf(thisCountry) > -1) {
    $.each(region, function(index, country) {
      $('.map [cc="' + country + '"]').addClass('active');
    });
  }
}).on('mouseleave', 'path, g', function() {
  $('.map .active').removeClass('active');
});

.active, .active path {
  fill: blue;
}

I then took that a little further with my demo and catered for multiple regions, just for laughs. We’re using the region index to apply a further class, for example

.group1
 . Then in the styles we want to give each group a different colour, so we’re defining an array of colours and using a SASS mixin to generate as many variations as we have colours in our list.

// Define our colour scheme as a list
$colors: cornflowerblue, indianred, forestgreen, gold, lightsalmon;

// Apply colour to the right group
@mixin path-colors {
  @for $i from 1 through length($colors) {
    .active.group#{$i - 1}, .active.group#{$i - 1} path {
      fill: nth($colors, $i);
    }
  }
}

.map {
  @include path-colors();
}

Finally, when we click on a country/region we want to do something with it. For the purpose of this demo, I’m just using the index in the class name to look up a name in another array and showing it as a message, but you could just as easily route that to another action depending on your use-case.

Here’s the final demo showing it all at work.

I hope you find that useful. Let me know if you think there are any improvements I could make to the demo, or if you would have approached the problem a different way!

The post Interactive SVG map appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/07/interactive-svg-map/feed/ 0
More scenery https://www.matthewdawkins.co.uk/2018/05/more-scenery/ https://www.matthewdawkins.co.uk/2018/05/more-scenery/#respond Wed, 02 May 2018 12:28:33 +0000 https://www.matthewdawkins.co.uk/?p=2294 In my latest few trips to the garage I’ve been working some more on various aspects of the scenery. The roads have now been painted in (though still waiting for appropriate white lines), Backwoods Station now has a curvy extension on the end (still needs covering), I’ve created a custom […]

The post More scenery appeared first on Matthew Dawkins.

]]>
In my latest few trips to the garage I’ve been working some more on various aspects of the scenery. The roads have now been painted in (though still waiting for appropriate white lines), Backwoods Station now has a curvy extension on the end (still needs covering), I’ve created a custom backscene, I’ve used matches to create a walkway across the tracks by Frontington Station, and I’ve built a couple of kits I got for my birthday.

The backscenes were an interesting experiment. I spent some time looking online for some suitable landscape images that were free, and managed to find one of the right sort of geography. It was actually quite hard to find a suitable image, with the right perspective and focus and distance from the camera. Then it was just a case of popping it into a DTP program and printing it out. I’ve mounted it on some thin hardboard with watered down PVA; it looked a little bubbly and wrinkly to begin with, but once it dried it went nice and flat again. Some of the edges need re-glueing, but that’s no big problem. I think it really transforms the layout!

  

Backwoods Station was always going to be a challenge. It’s on a curve, and I like running my passenger train with two coaches, so it needs to follow the track around the curve a bit. And because it’s on the outside of a radius 1 curve, it actually ends up quite far from the rolling stock at times, but thankfully you’d never notice that from a typical viewing angle. For the extension itself I’ve used an offcut of the polystyrene board I used for the base. I’ll need to cover it in some paper or card, and I’ll probably cover the plastic platform too while I’m at it so it’s all consistent.

I decided I also needed a way for people to cross the track from Frontington Station to get to the engine shed, so I made a little wooden walkway. It’s a bit rough round the edges, and still needs painting to make it look less ‘new’, but it’ll do the job. It’s matchsticks chopped in half and glued straight onto the track, leaving enough room for wheels to still get past.

Then there are the model kits I got for my birthday, including a water tower and some level crossing gates. They’ve come out quite nicely, I think. I’ve also built a couple of scale cars from plastic kits.

All in all, some good progress on the scenery! I’ve also started making a concrete floor for the engine shed using some thick cardboard, and I’ll take some photos of that once it’s installed. I also need to take the plunge and do some ballasting. And, in a final bit of news, I’ve just bought myself a new loco – a BR black J72 tank engine by Bachmann! I’m looking forward to that arriving, and I’ll be sure to take some photos of that too.

The post More scenery appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/05/more-scenery/feed/ 0
Welcoming Jesus https://www.matthewdawkins.co.uk/2018/03/welcoming-jesus/ https://www.matthewdawkins.co.uk/2018/03/welcoming-jesus/#respond Mon, 26 Mar 2018 13:34:04 +0000 https://www.matthewdawkins.co.uk/?p=2290 Whether it’s Palm Sunday or not, we need to welcome Jesus into our lives. Mark 14 shows us 3 ways we can do this: through sacrificial worship, like the woman who poured perfume on Jesus’ head; through submissive worship, like the disciples who trusted Jesus’ words; and through sacramental worship, […]

The post Welcoming Jesus appeared first on Matthew Dawkins.

]]>
Whether it’s Palm Sunday or not, we need to welcome Jesus into our lives. Mark 14 shows us 3 ways we can do this: through sacrificial worship, like the woman who poured perfume on Jesus’ head; through submissive worship, like the disciples who trusted Jesus’ words; and through sacramental worship, as seen at the Last Supper where Jesus instituted a lasting reminder of his grace.

This sermon was preached at the Palm Sunday service at St Aldhelm’s Doulting.

The post Welcoming Jesus appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/03/welcoming-jesus/feed/ 0
LocoSound – sound effects for DC model railways https://www.matthewdawkins.co.uk/2018/03/locosound-sound-effects-for-dc-model-railways/ https://www.matthewdawkins.co.uk/2018/03/locosound-sound-effects-for-dc-model-railways/#respond Fri, 23 Mar 2018 13:22:06 +0000 https://www.matthewdawkins.co.uk/?p=2282 LocoSound is now available for mobile devices (no installation required) to provide sound effects for DC model railways. We all know there are plenty of options for adding realistic sound to a DCC layout, but for those of us using traditional analogue DC technology our options have typically been slim […]

The post LocoSound – sound effects for DC model railways appeared first on Matthew Dawkins.

]]>

LocoSound is now available for mobile devices (no installation required) to provide sound effects for DC model railways.

We all know there are plenty of options for adding realistic sound to a DCC layout, but for those of us using traditional analogue DC technology our options have typically been slim to nonexistent. That’s where LocoSound comes in. It’s a simple web app that runs in your browser, offering realistic(ish) sound effects coupled to simple but effective on-screen controls.

Launch LocoSound Now

Obviously there is no connection between your device and your DC controller, so you’ll need to manually control both at the same time to achieve realism. It may also be worth connecting your device to a speaker mounted somewhere around your layout for better sound quality and position.

Environment

Some ambient environment sounds are available, which play on loop. Currently available is an English countryside sound and some general conversational chatter that would well for a small station platform. More may be added in time.

Steam loco

A recording of an A1X Terrier has been used to create a series of sounds suitable for a small tank engine.

Diesel loco

A simple diesel engine sound is included.

The post LocoSound – sound effects for DC model railways appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/03/locosound-sound-effects-for-dc-model-railways/feed/ 0
Logging with Laravel 5.6 and Loggly https://www.matthewdawkins.co.uk/2018/03/logging-with-laravel-5-6-and-loggly/ https://www.matthewdawkins.co.uk/2018/03/logging-with-laravel-5-6-and-loggly/#respond Wed, 21 Mar 2018 15:10:26 +0000 https://www.matthewdawkins.co.uk/?p=2278 When creating a website, it’s important to know what’s working and (even more importantly) what isn’t. That’s where logging comes in. Yes, logging, that oft-forgotten art. But, as Guillaume at Logmatic quite rightly points out, “PHP logs in particular are NOT JUST ABOUT ERRORS”. That’s his capitalisation there. He’s stressing the […]

The post Logging with Laravel 5.6 and Loggly appeared first on Matthew Dawkins.

]]>
When creating a website, it’s important to know what’s working and (even more importantly) what isn’t. That’s where logging comes in. Yes, logging, that oft-forgotten art. But, as Guillaume at Logmatic quite rightly points out, “PHP logs in particular are NOT JUST ABOUT ERRORS”. That’s his capitalisation there. He’s stressing the point, and for good reason. Logs can be useful for knowing what IS working too. I took inspiration from this post and got Laravel 5.6 integrated with another log monitoring platform (sorry Logmatic, not you) called Loggly.

Objectives

  • Use Laravel 5.6 and don’t mess with the core code.
  • Use Laravel’s built-in logging functionality.
  • Capture errors and information logs and store them locally in a file.
  • Also send all logs to a free monitoring service for better interrogation.
  • Ensure local log files are cleared out routinely.
  • Ensure the app can make me a cup of tea.

I don’t think that last objective will be satisfied, but hopefully we’ll do okay on the rest.

Why Loggly?

First and foremost, it’s free. At least for the amount of usage I’m anticipating for this app. There are other platforms out there, and some might even be better. Sentry looked very good too, although that’s geared up more errors rather than general logging. Loggly has a pretty simple interface for seeing what errors and logs are coming in from your app, and there is already a driver for Loggly baked into the Monolog package, which is what Laravel uses. So all smiles here.

Why Laravel 5.6?

Because it’s awesome. More specifically, though, I’m writing this up because the other tutorials I’ve found (including the documentation on Loggly’s own site, at the time of writing) only apply to Laravel 5.5 and below. I used their code and my app spat out an error at me. And no, Loggly did not log that error. So if you’re using Laravel 5.6 and you’re getting an error with the

configureMonologUsing()
  function, you need to take note of Laravel 5.6’s upgrade guide, and follow my instructions below.

Composer

First, let’s make sure we’re using the most up to date version of Monolog.

composer require monolog/monolog

There, that was easy.

Integrating Loggly

Now let’s open up /config/services.php, and add the following to the bottom of the array:

<?php

return [
  // ...

  'loggly' => [
    'key' => env('LOGGLY_KEY'),
    'tag' => str_replace(' ', '_', env('APP_NAME') . '_' . env('APP_ENV')),
  ]
];

We’re replacing any spaces with underscores so that Loggly accepts it properly. You’ll also notice there are a couple of environment parameters there, so let’s add those to our .env file:

LOGGLY_KEY=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Obviously you’ll need your own key. If you haven’t done so already, register for your free Loggly account, log in, go to Source Setup > Customer Tokens, and grab your customer token (create a new one if there isn’t one already there). That’s your key, so pop it in your .env file.

The ‘tag’ in the array above is how Loggly will identify your app. You can include various different sources in the same account if you want to, and the tag is what identifies where the log has come from. We’re including a reference to the environment to differentiate my local tests from production.

Next we need to tell Laravel how it’s going to get the logs into Loggly, and this is where the process differs compared to Laravel 5.5. Rather than calling

$app->configureMonologUsing()
 , we need to set up a custom logger class. To be honest, while it looks like more work, I prefer the design pattern going on here. We need to create a new PHP class at /app/Logging/LogglyLogger.php, which will look like this:

<?php

namespace App\Logging;

use Monolog\Handler\LogglyHandler;
use Monolog\Logger;

class LogglyLogger {

    public function __invoke($config) {
        $logger = new Logger(env('APP_NAME'));
        $logger->pushHandler(new LogglyHandler(env('LOGGLY_KEY') . '/tag/' . config('services.loggly.tag'), Logger::INFO ));
        return $logger;
    }

}

This tells Laravel that logs with a level higher than INFO (which is basically everything) need to be handled by Loggly. Note that we’re also setting a tag using the settings we defined in config/services.php. Now all we need to do is tell Laravel to use it. Let’s open up /config/logging.php, and add the following to the array:

<?php

return [
  'default' => env('LOG_CHANNEL', 'stack'),

  'channels' => [
    'stack' => [
      'driver' => 'stack',
      'channels' => ['custom', 'daily'],
    ],

    // ...
    
    'custom' => [
      'driver' => 'custom',
      'via' => App\Logging\LogglyLogger::class,
    ]
  ],
];

The ‘stack’ channel is useful because it means we can route our logs to more than one channel, which is exactly what we want in this case because we want them saved locally as well as sent to Loggly. I’ve used

'channels' => ['custom', 'daily']
 . You’ll see I’ve also linked up that custom class we just created.

All being well, that should be all you need to do. Fire up your app and generate an error, and it should appear in Loggly. You should also find a log file in /storage/logs/ with exactly the same error.

Moar logs

As I said (or did I quote?) at the beginning, logs are for more than just errors. So remember that you can (and should) include logging of useful information elsewhere in your app. For example, you might want to log successful user logins, failed logins, or indeed any other interaction. Think about it this way – if someone came to you and described a problem they were having and you needed to trace it back, what information would you need? Log it.

Log::info('User login successful', ['user_id' => $user->id]);

GDPR

An important consideration is that logs may contain Personally Identifiable Information. As per the GDPR regulations coming into force in May 2018, it’s worth making sure we treat these logs with respect and care.

One tenet of GDPR is ‘don’t store PII unless you need it’. So don’t log absolutely everything. Anonymise the log if you can. You’ll see in my example above I’m including the user ID but not the actual login details. Never store passwords. A good rule of thumb is to think about how embarrassing or costly it would be if someone else got hold of your logs.

Another piece of advice is ‘don’t keep PII longer than absolutely necessary’. How you define ‘absolutely necessary’ is up for debate, and probably depends on your app and the contents and context of the log itself. I’ve set up my app to store each day’s logs in its own local file, rather than dumping them all in one massive file. The Monolog library comes with its own log retention mechanism, so we can simply add the following line into our /config/app.php file and logs will only be kept for 5 days:

'log_max_files' => 5,

Loggly themselves are also keen to reassure us that GDPR is important to them too. They currently have a statement on their privacy policy about their commitment to GDPR. It doesn’t really give us much detail, but be assured that your data should be safe, and handled properly, and if you need to ask them specific questions about how and where your data is stored they’ll be happy to help.

Summary

Laravel is awesome. Loggly is awesome. Integrating the two is relatively straightforward. You can (and should) log user events as well as errors. You can keep the GDPR people happy by only logging what you need, and deleting it once it’s no longer useful.

Give it a go, and let me know how you get on, either in the comments below or on Twitter.

The post Logging with Laravel 5.6 and Loggly appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/03/logging-with-laravel-5-6-and-loggly/feed/ 0
WordPress vs. CraftCMS https://www.matthewdawkins.co.uk/2018/03/wordpress-vs-craftcms/ https://www.matthewdawkins.co.uk/2018/03/wordpress-vs-craftcms/#comments Fri, 16 Mar 2018 16:37:37 +0000 https://www.matthewdawkins.co.uk/?p=2270 I’ve known and loved WordPress for years, and I’ve used it personally and professionally for a number of projects. But a friend recently introduced me to CraftCMS, and suggested it might be better. The only way to find out is to try them both out and compare! For the purpose […]

The post WordPress vs. CraftCMS appeared first on Matthew Dawkins.

]]>
I’ve known and loved WordPress for years, and I’ve used it personally and professionally for a number of projects. But a friend recently introduced me to CraftCMS, and suggested it might be better. The only way to find out is to try them both out and compare!

For the purpose for this comparison, I’m creating a simple website that should look and work identical on both platforms. This will allow me to compare like-for-like more easily. The site itself should be pretty simple, in this case a static site with some categorised product pages. I’ll want the content editing to be as customisable as possible, within certain bounds. For ease of development, since design isn’t what we’re comparing, I’ll be using UIKit to build the components. For the sake of this exercise, I’ll be building a site to showcase various geometric shapes.


Installation

As a professional web developer, I like to have all my projects created in a way that makes them easy to update later. I track everything in Git, and use Composer to manage all my dependencies. When setting up our site, I want to ensure I can get up and running quickly and easily, without adding too much of the platform itself into my repo, so that I can easily update it later.

Installing CraftCMS

CraftCMS 3 is a doddle to install, as it’s fully integrated with Composer. A simple command gets everything in place:

composer create-project -s RC craftcms/craft shapemastercraft

There are a couple of extra parameters in here compared to normal, simply because version 3 is currently only in Release Candidate status, but that will change in the coming weeks/months. This command installs CraftCMS and gives me a nice clean directory structure to work with.

Another nice feature is that CraftCMS uses .env files for its environment-specific settings, which is basically the standard now for web projects – you’ll find it used in Laravel too, amongst many other platforms. I can put my database connection details in there, and of course that’s not added to my Git repo, so there’s no conflict with production setup.

I’ve already set up my local server environment to point to the right place, so I can go ahead and navigate to http://dev.shapemastercraftcms.com to load up my site. Immediately I see an error page, but apparently that’s normal because I haven’t set up a template yet. I can however navigate to /admin, which then guides me through the rest of the installation process, which happens to be a breeze. In moments, it’s all set up and I’m logged into the back end.

Installing WordPress

Getting WordPress installed feels like going back in time. A zip file? Seriously? I don’t want the WordPress core in my Git repo, I want to be able to keep it separate. Thankfully, there is a solution that converts WordPress into a Composer dependency (yay!). There are some instructions on http://composer.rarst.net/. I say “instructions”, it’s more of a rough guide, and there is still a lot of trial and error needed to get it working. I ended up referring to a couple of other blogs as well, which filled in some of the gaps. In the end, I created this composer.json file which eventually did the magic install:

{
  "name": "tbt/shapemasterwp",
  "description": "Test project for WordPress stack via Composer",
  "type": "project",
  "repositories": [
    {
      "type": "composer",
      "url": "https://wpackagist.org"
    }
  ],
  "config": {
    "vendor-dir": "wp-content/vendor"
  },
  "require": {
    "composer/installers": "1.5.*",
    "johnpbloch/wordpress": ">=4.9"
  },
  "require-dev": {
  },
  "extra": {
    "wordpress-install-dir": "public/wp",
    "installer-paths": {
      "public/wp-content/plugins/{$name}/": ["type:wordpress-plugin"],
      "public/wp-content/themes/{$name}/": ["type:wordpress-theme"]
    }
  }
}

With this in place I was finally able to run 

composer install
 and have Composer download WordPress into its own directory as a dependency. I also have a public folder, which means I can potentially hide away settings and important files so that they’re not accessible on the web. It also installs themes and plugins in a separate folder, so that updating the WordPress core doesn’t overwrite or delete my work. I have a /public/wp folder that contains the WordPress core, and I’ll never need to change anything in there.

Next, I create /public/index.php:

<?php

define('WP_USE_THEMES', true);
require('./wp/wp-blog-header.php');

And /public/wp-config.php:

<?php

ini_set( 'display_errors', 0 );

// ===================================================
// Load database info and local development parameters
// ===================================================
if ( file_exists( dirname( __FILE__ ) . '/../production-config.php' ) ) {
    define( 'WP_LOCAL_DEV', false );
    include( dirname( __FILE__ ) . '/../production-config.php' );
} else {
    define( 'WP_LOCAL_DEV', true );
    include( dirname( __FILE__ ) . '/../local-config.php' );
}

// ========================
// Custom Content Directory
// ========================
define( 'WP_CONTENT_DIR', dirname( __FILE__ ) . '/wp-content' );
define( 'WP_CONTENT_URL', 'http://' . $_SERVER['HTTP_HOST'] . '/wp-content' );

// ================================================
// You almost certainly do not want to change these
// ================================================
define( 'DB_CHARSET', 'utf8' );
define( 'DB_COLLATE', '' );

// ================================
// Language
// Leave blank for American English
// ================================
define( 'WPLANG', '' );

// ======================
// Hide errors by default
// ======================
define( 'WP_DEBUG_DISPLAY', false );
define( 'WP_DEBUG', false );

// =========================
// Disable automatic updates
// =========================
define( 'AUTOMATIC_UPDATER_DISABLED', false );

// =======================
// Load WordPress Settings
// =======================
$table_prefix  = 'wp_';

if ( ! defined( 'ABSPATH' ) ) {
    define( 'ABSPATH', dirname( __FILE__ ) . '/wp/' );
}
require_once( ABSPATH . 'wp-settings.php' );

This is a handy setup (which I found online somewhere, and sadly can’t remember where) which allows me to specify an environment file in my project root:

<?php

define( 'DB_NAME', 'shapemasterwp' );
define( 'DB_USER', 'homestead' );
define( 'DB_PASSWORD', 'secret' );
define( 'DB_HOST', 'localhost' );

ini_set( 'display_errors', E_ALL );
define( 'WP_DEBUG_DISPLAY', true );
define( 'WP_DEBUG', true );

define('AUTH_KEY',         '...');
define('SECURE_AUTH_KEY',  '...');
define('LOGGED_IN_KEY',    '...');
define('NONCE_KEY',        '...');
define('AUTH_SALT',        '...');
define('SECURE_AUTH_SALT', '...');
define('LOGGED_IN_SALT',   '...');
define('NONCE_SALT',       '...');

With all this done, I can finally bring up http://dev.shapemasterwp.com/wp/wp-admin in my browser. The WordPress installation process kicks in, and a few moments later I’m logged into the back end.

But before we go any further we need to quickly sort out the site URL. Because we’ve installed the WP core in its own folder, we need to go into the General settings and change the Site Address so that it doesn’t include the trailing ‘/wp’. The WordPress Address retains it. Now I can visit http://dev.shapemasterwp.com and see my website, which, as expected, is just a blank screen, because I haven’t installed a theme yet.

Summary

CraftCMS wins hands down on this bit. With a bit (a.k.a. a lot) of jiggerypokery we can get WordPress installed as a Composer dependency, but it’s not standard and requires manually creating several files. It only needs to be done once, and updates to the WP core can now be done with 

composer update
 , but it’s hardly intuitive. CraftCMS installs in moments, giving me a clean developer-friendly directory structure that can easily be tracked via Git.


Creating a custom theme

Off-the-shelf themes are great if you’re not a developer. But (as you’ll have guessed) I am. I want to make my own from scratch. Nothing fancy on this occasion, just something quick and functional. For now, all I want is a home page where I can put some basic content.

Templating in CraftCMS

This is a game of two halves. One the one hand, you’ve got template files using Twig, neatly and logically arranged in a /templates folder. Sweet. Here’s my /templates/_layouts/master.twig file:

<!doctype html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <title>{{ siteName }} : {{ pageName|default(entry.title|default('')) }}</title>
        <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/uikit/3.0.0-beta.40/css/uikit.min.css" integrity="sha256-VyWNo3nreq7kl76bp/ETa0Tbq3FVqCd6wCMF49aGP4c=" crossorigin="anonymous" />
        {{ head() }}
    </head>
    <body>

        <nav class="uk-navbar-container uk-margin" uk-navbar>
            <div class="uk-navbar-left">
                <a href="{{ siteUrl }}" class="uk-navbar-item uk-logo">{{ siteName }}</a>
            </div>
            <div class="uk-navbar-right">
                <ul class="uk-navbar-nav">
                    <li><a href="{{ siteUrl }}">Home</a></li>
                </ul>
            </div>
        </nav>

        <div class="uk-container">

            {% block content %}{% endblock %}

        </div>

        <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/uikit/3.0.0-beta.40/js/uikit.min.js" integrity="sha256-wpeKFfumxNfqAlC4/AkTbuhMaUp72QxUIjEkyFpH1Jc=" crossorigin="anonymous"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/uikit/3.0.0-beta.40/js/uikit-icons.min.js" integrity="sha256-ygOvSgNXVQ3nXNfd5lsn+a6k4THX1tW24aOwm6qMCxI=" crossorigin="anonymous"></script>
        {{ endBody() }}
    </body>
</html>

I’m including UIKit and jQuery from a CDN, just for convenience. There are a few CraftCMS-specific tags available, which allows me to insert things like the site name and URL. I’m also defining one block called ‘content’, which I’ll populate in my home page’s template file, /templates/home-page/_entry.twig:

{% extends '_layouts/master.twig' %}

{% block content %}
    'Hello world!'
{% endblock %}

Beautiful. But not yet connected up to my actual content.

In CraftCMS we have the concepts of ‘sections’, ‘entries’ and ‘fields’. A section is an area of your site, which likely contains one or more entries, and each entry is defined by one or more fields. All of these need setting up via CraftCMS’s settings screens. You cannot do this via code, and it’s all stored in the database. I’ll be honest, this wasn’t exactly intuitive, and took some back-and-forth before I finally had it all set up correctly. To begin with I just set up one text field called ‘body’, which meant I could reference it in my template:

{% extends '_layouts/master.twig' %}

{% block content %}
    {{ entry.body }}
{% endblock %}

Now, finally, I can open up the public home page and see my home page.

Templating in WordPress

The first step is to create myself a basic (and compulsory) stylesheet file in /public/wp-content/themes/shapemaster/style.css. Since I’m pulling in UIKit from a CDN I don’t actually have any need of a stylesheet yet, but it’s how WordPress identifies the theme.

/*
Theme Name: ShapeMaster WP
*/

We also need an index.php file. Best practice tells us to separate out the header and footer of our site so that the common elements only need specifying once. Since WordPress doesn’t have any concept of template inheritance, we’re left with includes. Here’s my header.php file:

<!doctype html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <title><?php bloginfo('name'); ?> : <?php the_title(); ?></title>
        <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/uikit/3.0.0-beta.40/css/uikit.min.css" integrity="sha256-VyWNo3nreq7kl76bp/ETa0Tbq3FVqCd6wCMF49aGP4c=" crossorigin="anonymous" />
        <?php wp_head(); ?>
    </head>
    <body>

        <nav class="uk-navbar-container uk-margin" uk-navbar>
            <div class="uk-navbar-left">
                <a href="<?php bloginfo('url'); ?>" class="uk-navbar-item uk-logo"><?php bloginfo('name'); ?></a>
            </div>
        </nav>

        <div class="uk-container">

And my footer.php file:

</div>

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/uikit/3.0.0-beta.40/js/uikit.min.js" integrity="sha256-wpeKFfumxNfqAlC4/AkTbuhMaUp72QxUIjEkyFpH1Jc=" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/uikit/3.0.0-beta.40/js/uikit-icons.min.js" integrity="sha256-ygOvSgNXVQ3nXNfd5lsn+a6k4THX1tW24aOwm6qMCxI=" crossorigin="anonymous"></script>
<?php wp_footer(); ?>
</body>
</html>

And finally my index.php file:

<?php get_header(); ?>

<?php if (have_posts()) : the_post(); ?>
<?php the_content(); ?>
<?php endif; ?>

<?php get_footer(); ?>

It’s pretty self-explanatory really, if a little clunky. I can now activate the theme in the WordPress admin, view my site in the browser, and it’ll show my site.

Summary

Both platforms have their strengths and weaknesses when it comes to templating.

WordPress feels like a dinosaur with its old-school file inclusion approach. But it’s easy to get started, and everything in my theme is defined in my codebase, so I know I can track it and deploy it easily.

CraftCMS uses Twig, meaning it can benefit massively from the advanced features of that templating language. It’s code is neater, and will ultimately be more powerful. And it ensures that no dangerous PHP can sneak in and cause unexpected problems. Where CraftCMS falls down (massively, I think) is that half of its template definition is held in the database, and can only be administered visually. Once you know where everything is, it’s pretty easy to add new sections and fields and so on (and those fields are very powerful, as we’ll see shortly), but I can’t add that structure to my Git repo, which means it’s going to be an absolute headache to deploy.

I’d call this a draw. WordPress is simultaneously fairly good and fairly bad. CraftCMS is simultaneously really really good and really really bad.


Product pages

Next up, we want to create some product pages for our shapes. We’ll need a couple of different ranges, each containing an assortment of products that can be managed in the back end. Each product needs to have some engaging content. And of course we’ll need to integrate some navigation so that people can find the products.

Products in CraftCMS

Setting up product pages and range pages in CraftCMS is really no different from setting up a basic page, just a little more fiddly. It all happens in the admin area, so there’s no code I can show you for the setup.

A really nice feature of CraftCMS is its choice of field types. There are the obvious ones, of course, but there’s also one called ‘Matrix’. This allows the content editor to add additional fields in whatever order and quantity as necessary, but only within the options you specify. This means you can keep close control of how the site looks, while also providing flexibility. Combine this with the Redactor plugin (which provides a rich text editor, which sadly doesn’t come bundled with CraftCMS) and you’ve got a whole world of joy that you can pass on to your content editors.

I ended up creating a reusable Twig file for displaying my content areas, which loops through the different field types available in the matrix and displays them nicely. I can use that Twig template in my product pages and also my home page, allowing me to effortlessly integrate image carousels, hero blocks, and more.

{% for item in productContent.all() %}
    {% if item.type == 'hero' %}
        <div class="uk-section uk-section-muted uk-padding uk-text-center">
            <h2 class="uk-heading-hero">{{ item.heroTitle }}</h2>
            <h3 class="uk-heading-primary">{{ item.heroSubtitle }}</h3>
        </div>
    {% elseif item.type == 'description' %}
        <div class="uk-section">
            {{ item.body }}
        </div>
    {% elseif item.type == 'slider' %}
        {% include '_components/_slider.twig' with {'slider': imageSlider} only %}
    {% endif %}
{% endfor %}

There’s also a specific sub-template for creating the slider, using UIKit’s functionality:

<div uk-slider="autoplay: true">
    <ul class="uk-slider-items uk-child-width-1-1">
        {% for item in slider.all() %}
            <li>
                <img src="{{ item.imageUrl }}" alt="">
            </li>
        {% endfor %}
    </ul>

    <ul class="uk-slider-nav uk-dotnav uk-flex-center uk-margin"></ul>
</div>

My minimal product template now looks like this:

{% extends '_layouts/master.twig' %}

{% block content %}
    <div class="uk-flex uk-flex-between uk-margin">
        <div><h1>{{ entry.title }}</h1></div>
        <div><img src="{{ entry.productImage.one().getUrl('thumb') }}" alt=""></div>
    </div>
    
    {% include '_components/_productContent.twig' with {'productContent': entry.productContent, 'imageSlider': entry.imageSlider} only %}

{% endblock %}

Finally, to add the navigation into my master template, we can loop through the categories in the productRange category group we’ve created in CraftCMS:

<div class="uk-navbar-right">
    <ul class="uk-navbar-nav">
        <li><a href="{{ siteUrl }}">Home</a></li>
        {% nav category in craft.categories.group('productRange').all() %}
            <li><a href="{{ category.url }}">{{ category.title }}</a></li>
        {% endnav %}
    </ul>
</div>

I know I haven’t explained absolutely everything line by line, but my overall impression is that creating the product pages was pretty easy, and the end result very effective. It didn’t take me long, even though I started with zero experience of CraftCMS.

Custom post types in WordPress

To create the same sort of site structure in WordPress, we’ll need to create our own custom post type. To accomplish this, we’ll need our own plugin, which I’m creating in /public/wp-content/plugins/shapemaster/plugin.php (complete with the usual plugin header comment so WordPress recognises it). Here’s the main bit of code that makes it all happen:

<?php

add_action('init', function() {
    // Custom post type for our products
    register_post_type('product', [
        'label' => 'Product',
        'labels' => [
            'name' => 'Products',
            'singular_name' => 'Product'
        ],
        'taxonomies' => ['product_range'],
        'supports' => ['title', 'editor', 'thumbnail', 'revisions'],
        'public' => true,
        'hierarchical' => true,
        'has_archive' => true,
        'rewrite' => [
            'with_front' => false
        ]
    ]);

    // The taxonomy that groups products into ranges (behaves much like a category)
    register_taxonomy('product_range', 'product', [
        'label' => 'Range',
        'rewrite' => ['slug' => 'range'],
        'hierarchical' => true
    ]);
});

Refresh the WP admin, and as if by magic I have a new section in the left-hand navigation for my products, and a new taxonomy for the product ranges (which works much like a category).
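
One gotcha worth flagging: those custom rewrite rules (the nice /range/ slugs) won’t take effect until WordPress flushes its permalinks – either by visiting Settings → Permalinks in the admin, or by flushing on plugin activation. Here’s a rough sketch of the latter, added to the same plugin.php; it assumes the registration code above has been pulled out into a named function (shapemaster_register_types() is a hypothetical name) so both hooks can share it:

// Hypothetical refactor: the register_post_type()/register_taxonomy() calls
// above live in a named function instead of a closure.
add_action('init', 'shapemaster_register_types');

register_activation_hook(__FILE__, function () {
    // Register the post type and taxonomy first so their rewrite rules exist...
    shapemaster_register_types();
    // ...then rebuild the permalink rules once, on activation.
    flush_rewrite_rules();
});

// Tidy up the rewrite rules again when the plugin is deactivated
register_deactivation_hook(__FILE__, 'flush_rewrite_rules');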

Creating the range page is relatively straightforward:

<?php get_header(); ?>

<h1>Range: <?php single_term_title(); ?></h1>

<?php if (have_posts()) : ?>
    <div uk-grid>
        <?php while (have_posts()) : the_post(); ?>
            <div>
                <div class="uk-card uk-card-default uk-card-body">
                    <?php if (has_post_thumbnail()) : ?>
                        <a href="<?php the_permalink(); ?>"><?php the_post_thumbnail('thumbnail'); ?></a>
                    <?php endif; ?>
                    <h4><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h4>
                </div>
            </div>
        <?php endwhile; ?>
    </div>
<?php else : ?>
    <p>No products found.</p>
<?php endif; ?>

<?php get_footer(); ?>

I could have created a specific template for the products themselves, but on this occasion (since I was getting bored by this point) I decided to reuse the main index.php file.

I was intrigued by the power of CraftCMS’s Matrix field type, so I installed SiteOrigin’s Page Builder plugin, which provides a similar sort of experience. I had to install their widget bundle plugin as well, and I ended up creating my own widget to allow me to define a hero section. To be honest, this was the hardest part of the whole process, and the experience for a content editor is far from perfect. But I eventually had it working, and to the eyes of an everyday punter the two sites now look pretty much identical.
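
I won’t reproduce the SiteOrigin-specific widget code here, but to give a flavour of what a hero widget involves, here’s a rough sketch built on WordPress’s core WP_Widget API instead (Page Builder can place ordinary registered widgets too). The class and field names are invented for illustration – it’s not the exact code I shipped:

<?php

class Shapemaster_Hero_Widget extends WP_Widget
{
    public function __construct()
    {
        parent::__construct('shapemaster_hero', 'Shapemaster Hero', [
            'description' => 'A big hero block with a title and subtitle.'
        ]);
    }

    // Front-end output, reusing the same UIkit classes as the CraftCMS hero block
    public function widget($args, $instance)
    {
        echo $args['before_widget'];
        echo '<div class="uk-section uk-section-muted uk-padding uk-text-center">';
        echo '<h2 class="uk-heading-hero">' . esc_html($instance['title'] ?? '') . '</h2>';
        echo '<h3 class="uk-heading-primary">' . esc_html($instance['subtitle'] ?? '') . '</h3>';
        echo '</div>';
        echo $args['after_widget'];
    }

    // Admin form for editing the widget's settings
    public function form($instance)
    {
        foreach (['title', 'subtitle'] as $field) {
            printf(
                '<p><label>%s <input class="widefat" name="%s" value="%s"></label></p>',
                ucfirst($field),
                $this->get_field_name($field),
                esc_attr($instance[$field] ?? '')
            );
        }
    }

    // Sanitise the settings before they are saved
    public function update($new_instance, $old_instance)
    {
        return [
            'title'    => sanitize_text_field($new_instance['title'] ?? ''),
            'subtitle' => sanitize_text_field($new_instance['subtitle'] ?? ''),
        ];
    }
}

add_action('widgets_init', function () {
    register_widget('Shapemaster_Hero_Widget');
});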

Summary

Defining the data structure in the code is far more fiddly than in CraftCMS, and takes longer to get working smoothly. On the other hand, it’s all there in the code, making future development and deployment easier. From a content editor’s perspective, CraftCMS is the clear winner here, as even with the Page Builder plugin the user experience is a bit clunky. Another consideration is that I enjoyed creating the product pages in CraftCMS, whereas in WordPress it was more painful.


Conclusion

This has been a fairly brief look at WordPress and CraftCMS, and neither appears to be 100% perfect. As a developer, I like to define things in code. The trouble is, WordPress’s code is awful. It feels outdated, and doesn’t have the same spirit of elegance that I’m accustomed to with other modern frameworks. Deployment is undoubtedly going to be easier with WordPress, but that alone won’t necessarily make for a better site. And of course we all know about the bloat of third-party plugins and a slew of security considerations…

CraftCMS on the other hand was a joy to use. It installed in moments, uses Twig for its templating, and encourages creativity and flexibility with its content. All good things. Its reliance on storing content structure in the database is cause for concern though, and I really hope that’s something the community finds a solution to one day. But overall, CraftCMS doesn’t hold your hand too much, and gives a competent developer plenty of control over exactly how the site looks and feels.

You hack WordPress to make it work the way you want, whereas you carefully develop CraftCMS from the ground up, which I imagine would result in a more predictable and better website. WordPress has undoubtedly changed the internet in the last few years, but unless it completely revamps itself it’s liable to end up falling out of favour, to be replaced by more forward-thinking platforms like CraftCMS.

Which would I choose for my next project? It depends. Both have their place. But CraftCMS might just pip WordPress to the post…

The post WordPress vs. CraftCMS appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/03/wordpress-vs-craftcms/feed/ 3
Covering a Hornby goods shed https://www.matthewdawkins.co.uk/2018/03/covering-hornby-goods-shed/ https://www.matthewdawkins.co.uk/2018/03/covering-hornby-goods-shed/#respond Wed, 07 Mar 2018 12:53:25 +0000 https://www.matthewdawkins.co.uk/?p=2259 Last night I transformed an old Hornby goods shed into something new. It was a pretty standard building in the Hornby range, with its recognisable shape and brick texture, and I decided that needed changing. I don’t want my railway to look the same as anyone else’s. So I designed […]

The post Covering a Hornby goods shed appeared first on Matthew Dawkins.

]]>
Last night I transformed an old Hornby goods shed into something new. It was a pretty standard building in the Hornby range, with its recognisable shape and brick texture, and I decided that needed changing. I don’t want my railway to look the same as anyone else’s. So I designed and printed off some new textures to cover up all the walls and roof.

To refresh your memory, here’s what it used to look like (although this photo is of a slightly newer model with different colouring).

On mine, the brickwork was a pretty unconvincing brown stone, and the canopy was yellow and broken. So I found some suitable textures on the Sketchup Textures website and created some new panels to stick over the top. I printed them out on card, cut them out, and glued them on with PVA. I also went over the card edges with a black pen, which made it look a lot better, as you’ll see from the photos below.

  

The post Covering a Hornby goods shed appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/03/covering-hornby-goods-shed/feed/ 0
Planning out the station https://www.matthewdawkins.co.uk/2018/03/planning-out-the-station/ https://www.matthewdawkins.co.uk/2018/03/planning-out-the-station/#respond Mon, 05 Mar 2018 11:32:21 +0000 https://www.matthewdawkins.co.uk/?p=2242 Having done some experimentation on an old piece of track, and with the help of a friend, I have taught myself to solder again. It’s taken a few evenings, but I now have the entire layout wired up with cab control. Now I can move on to planning the layout […]

The post Planning out the station appeared first on Matthew Dawkins.

]]>
Having done some experimentation on an old piece of track, and with the help of a friend, I have taught myself to solder again. It’s taken a few evenings, but I now have the entire layout wired up with cab control. Now I can move on to planning the layout of the station.

But before we get into the station, let me share my excitement about cab control. If you saw my previous post, you’ll know that this is a term I picked up from a forum, and I immediately saw the value in it for a DC layout. It splits the track into logical electrical blocks, each of which can be switched so that either controller can power it. I’m using a common negative rail, and plastic rail joiners to isolate the positive rail. I have soldered strategically, relying on conductivity between track sections for the rest, which some may argue isn’t ideal, but it’s good enough for now. I can always go back and add in more feeder wires later if I absolutely need to. Anyway, it now means I can run a train around the loop using one controller and then do a little shunting with the other. Or have one train leaving the station and heading up as far as the top station, while another leaves in the opposite direction and goes into the siding at the top. I’ve tested it out, and it really adds another dimension of ‘play’!

With all of that done, I’ve started to focus on the main station, which so far has been a bit of an unknown. I wanted a car park and a picnic area, but wasn’t sure where either of them was going to go. I also wanted more buildings for visitors, like a gift shop or a model railway or a museum, and to add some personalisation to the buildings so that they don’t just look like everyone else’s; my platforms and some of the buildings are standard Hornby kits, and are pretty recognisable at the moment.

The plan now is to have the picnic area over on the right of the platform, near the level crossing. I’ll make it a grassy area, and add some trees and bushes to separate the picnic area from the road. That leaves plenty of room for the car park immediately behind the platform. I’ve also decided to add my old goods loading shed behind the platform on the left hand side, to act as a bit of a museum. It’ll have a piece of track in it, but not connected to anything else, perhaps giving the impression that it might have been a bigger station in years gone by. I’ll probably put an old loco inside for people to look at. The signal box is now at the end of the platform, and I’ll need to make a small flight of steps up to the door.

The level crossings also started to take shape last night. I’m using some offcuts of card (not least because one of the crossings is on a curve), simply glued down with neat PVA. To make the transition from board to card even smoother, I’ve gone over the edges with some newspaper, again with PVA. Once that’s dry, I’ll be able to paint it all up.

While I had the paints out, I also had a go at modifying an old truck I had lying around. One of the features of my layout is that I’ll have a siding at the top of the layout where coal can be loaded on from a lorry (there’s a service road to it from the main road). A tank engine will load up the truck there, then take it down to the main station area where there will be a more convenient coal staithe. This all means having a suitable truck. I have a couple of really small empty ones, but I also had a slightly larger one that was filled with what looks like rock. So I took some acrylic paint to it and made it black. I also added some dry(ish) paint to the sides to add some weathering. Not a bad attempt!

 

Finally, I took inspiration from a YouTube video and coloured in the plastic bits on my points. Yes, I’m using insulfrog rather than electrofrog, because that’s the track I inherited. Not as great for conductivity, but you gotta work with what you got. Anyway, the suggestion was to colour in the black plastic insulation bits with a silver pen to make them look more prototypical. The pen shouldn’t be electrically conductive, so it shouldn’t have any impact on the electricals. The only silver pen I had to hand was a gel pen, so time will tell how long that will last. I don’t really want that rubbing off onto the wheels! But it definitely looks a lot better now.

The next step will be making the buildings look a bit more unique. My plan is to reuse the Hornby buildings but cover them in a printed texture. I’m pretty good with a computer, so this shouldn’t be an unreasonable challenge. The biggest hurdle will be making sure it stays stuck on and doesn’t peel off or wrinkle up. I’ve got some spray lacquer, so maybe that will help.

The post Planning out the station appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/03/planning-out-the-station/feed/ 0
Code quality matters https://www.matthewdawkins.co.uk/2018/02/code-quality-matters/ https://www.matthewdawkins.co.uk/2018/02/code-quality-matters/#respond Tue, 27 Feb 2018 11:31:49 +0000 https://www.matthewdawkins.co.uk/?p=2228 We’ve all been there. Time is short, so just bash out a bit of code, and as long as it works we can move on. Code is for computers anyway, right? Who cares what it looks like? Actually, how our code is formatted can totally save our bacon later, and […]

The post Code quality matters appeared first on Matthew Dawkins.

]]>
We’ve all been there. Time is short, so just bash out a bit of code, and as long as it works we can move on. Code is for computers anyway, right? Who cares what it looks like? Actually, how our code is written and formatted can totally save our bacon, and save us time in the long run. Today we’ll look at how HTML should be written, and briefly touch on CSS and PHP too.

Why does accuracy matter?

This might sound like it should be obvious, but it’s best to start here. Take a look at this little bit of code.

<ul><li>Here is a bullet item</ul>

Show this in a browser, and you’ll see that it has the desired effect – it shows a single item in a bullet list. Look closely, though, and you’ll see that the closing </li> is missing. It’s not a problem though, because the browser still knows what to do with it. So it doesn’t matter if it’s technically wrong, right?

Wrong is still wrong, even if it looks right.

There are specifications telling browser manufacturers how code should be rendered, but they’re far less clear about what should happen when the code is wrong. This can lead to inconsistencies between browsers, each interpreting your dodgy code in different ways. That unreliability can come back to bite you later on, and you could end up spending more time fixing it than if you had done it properly in the first place.

Now for our second piece of offensive code.

<html>
<body>
<p>Hello world.
<p>Here is another line.

If you were to run this through the W3C HTML Validator, it would complain at you. It’s not valid. It’s not correct. It’s incomplete. Yet if you open it in a browser it will display exactly the same as if it were perfect. In a minimal example like this you could argue that it doesn’t matter, but it certainly would at scale. Once you’ve got a whole site of hideous invalid code, you’re sailing in dangerous waters.

So know your stuff. If you’re not sure how code should be used, look it up. Don’t guess, don’t settle for anything less than perfect. You’ll thank me later.

Valid HTML

I feel it’s about time for a beautiful piece of perfect code.

<!doctype html>
<title>.</title>

That, as far as I can work out, is the smallest valid HTML5 document possible. It breaks no rules. Run it through a validator and it will be all smiles and thumbs up. Not particularly useful, admittedly, but at least it’s valid. It turns out the <html>, <head> and <body> elements aren’t strictly required. It really is worth checking the specs every now and then, just to make sure you understand what’s valid and what isn’t.

Why does formatting matter?

This is another contentious issue, especially for beginners. Formatting makes absolutely no difference to computers. A badly formatted piece of code will likely produce exactly the same result as a perfectly formatted piece of code, as long as it’s technically accurate. Case in point, here is a valid but pretty incomprehensible piece of HTML:

<!doctype html><html lang="en"><head><meta charset="utf-8"><meta http-equiv="x-ua-compatible" content="IE=edge"><title>Hello world</title><meta name="description" content="Just a test"><meta name="viewport" content="width=device-width, initial-scale=1"><link rel="stylesheet" href="css/style.css"></head><body><h1>Hello world</h1><script src="js/app.js"></script></body></html>

It’s perfectly valid, but it’s been compressed, taking all the formatting out of it. And a browser doesn’t care in the slightest.

But – and here’s the crunch point – computers aren’t the ones writing the code, we are! We have to understand it too. Clear code is far easier to work with, especially if you’re picking up someone else’s code. Clear code also makes bugs easier to find and avoid. For comparison, look at the exact same code when it’s formatted properly:

<!doctype html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta http-equiv="x-ua-compatible" content="IE=edge">
        <title>Hello world</title>
        <meta name="description" content="Just a test">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <link rel="stylesheet" href="css/style.css">
    </head>
    <body>
        <h1>Hello world</h1>
        <script src="js/app.js"></script>
    </body>
</html>

Indentation makes the code sooooo much easier on the eye. You can visually see the hierarchy. You can see where an element ends without having to find the closing tag. And if your code editor happens to support syntax highlighting (it does, doesn’t it?) then you’ll also be able to see which are tags, which are attributes and which are values.

So what should it look like?

Thankfully, we’re not exactly fumbling in the dark when it comes to how best to format our code. People have talked about this for years, and there are established recommendations on how we should do it. Here is a brief summary to get you started:

  • Use lowercase in tags.
    • <a href="somewhere.html">Link</a>
        YES!
    • <A HREF="somewhere.html">Link</A>
        NO
    • <A Href="somewhere.html">Link</a>
        NO
  • Use 4 spaces for indentation. Code editors can usually be told to automatically maintain your current indentation level, and you can often specify whether you want to indent with tabs or spaces. 4 spaces is the recommendation.
  • Know which elements are self-closing. Here are a few examples:
    • <meta name="description" content="Hello world.">
        Self-closing
    • <img src="bunny.jpg" alt="Here's a bunny">
        Self-closing
    • <p>Here is a paragraph.</p>
        Needs closing tag
    • <script>console.log('Hello world');</script>
        Needs closing tag
  • Use inline and block elements correctly. Inline elements can be nested inside block elements. Block elements shouldn’t be nested inside inline elements. And some elements automatically close others if used incorrectly.
    • <p><span>Hello world</span></p>
        Inline inside block = YES
    • <span><p>Hello world</p></span>
        Block inside inline = NO
    • <div><p>Hello world</p></div>
        YES
    • <p><div>Hello world</div></p>
        NO (child <div> will automatically close the opening <p> tag, making the closing </p> tag invalid)
  • Stop using old elements
    • Use <strong> instead of <b>
    • Use <em> instead of <i>
    • Never use <blink>
  • Use double quotes for HTML attributes. This is for consistency, and especially comes into play when using inline Javascript.
    • <a href="something.html">Link</a>
        YES
    • <a href='something.html'>Link</a>
        NO (technically valid, but not recommended)

Don’t forget the styles

The same principles apply to your CSS: the computer won’t care if it’s ugly, but you should. Again, here are some quick pointers:

  • Indent with 4 spaces
  • Use spaces correctly
    • font-weight: bold;
        YES
    • font-weight:bold;
        NO
  • Don’t over-specify elements
    • #emphasis {}
        YES
    • .something #emphasis {}
        NO
  • Use comments to add clarity
  • Separate files for context
    • For example: layout.css, type.css, header.css, blockquotes.css, product.css
  • Don’t combine styles onto one line

Depending on who you talk to, there are some other conventions you could adhere to, like ordering your CSS properties alphabetically, or by function. There’s also a case to be made for always using classes over IDs, but that’s up to you.

And finally, the PHP

Rather than trying to summarise this and get it wrong, I would instead direct you to the excellent PSR-1 and PSR-2 documents, which outline exactly what PHP should look like. But, to whet your appetite, here is a bit of PHP code (taken from the PSR-2 documentation) that keeps the pedants happy:

<?php
namespace Vendor\Package;

use FooInterface;
use BarClass as Bar;
use OtherVendor\OtherPackage\BazClass;

class Foo extends Bar implements FooInterface
{
    public function sampleMethod($a, $b = null)
    {
        if ($a === $b) {
            bar();
        } elseif ($a > $b) {
            $foo->bar($arg1);
        } else {
            BazClass::bar($arg2, $arg3);
        }
    }

    final public static function bar()
    {
        // method body
    }
}

Note the use of line breaks, spaces, blank lines, brace positions, capitalisation, indentation, and function visibility. If this were a real-world bit of code I would want more comments in there, but it’s a good start!
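
And to show the sort of comments I mean, here’s a trivial made-up function with a docblock – not from the PSR documents, just an illustration:

<?php

/**
 * Calculates the gross total of an order including VAT.
 *
 * @param float $net     The net order total
 * @param float $vatRate The VAT rate as a decimal, e.g. 0.2 for 20%
 *
 * @return float The gross total, rounded to two decimal places
 */
function grossTotal(float $net, float $vatRate): float
{
    // Round to the nearest penny so downstream code isn't surprised by long floats
    return round($net * (1 + $vatRate), 2);
}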

Resources

To get you going in the right direction from the outset, I’d highly recommend you take a look at html5boilerplate.com. It’s available as a Git repository, if you’re into that kind of thing (and if you’re not, you should be), or you can just download it as a zip and get coding. Think of it as a foundation of good quality code on which you can build your actual web page. Here are some other HTML-related resources you might appreciate:

And some similar resources for CSS:

And finally, some for PHP:

Also, make sure you’re using a code editor that is fit for purpose. Don’t use Notepad.exe. Use something like Atom, Brackets, Sublime Text, or even PhpStorm if you’re properly serious.

The post Code quality matters appeared first on Matthew Dawkins.

]]>
https://www.matthewdawkins.co.uk/2018/02/code-quality-matters/feed/ 0