Privacy By Design: How To Sell Privacy And Make Change


Joe Toscano


Privacy is a fundamental human right that allows us to be our true selves. It’s what allows us to be weirdos without shame. It allows us to have dissenting opinions without consequence. And, ultimately, it’s what allows us to be free. This is why many nations have strict laws concerning privacy. However, in spite of this common understanding, privacy on the Internet is one of the least understood and poorly defined topics to date because it spans a vast array of issues, taking shape in many different forms, which makes it incredibly difficult to identify and discuss. However, I’d like to try to resolve this ambiguity.

In the United States, it is a federal offense to open someone’s mail. This criminal breach of privacy could land someone in prison for up to five years. Metaphorically speaking, each piece of data we create on the Internet — whether photo, video, text, or something else — can be thought of as a parcel of mail. However, unlike opening our mail in real life, Internet companies can open every piece of mail that gets delivered through their system without legal consequence. Moreover, they can make copies of it as well. What these companies are doing is comparable to someone opening our mail, copying it at Kinko’s, storing it in a file cabinet with our name on it, and then sharing it with anyone willing to pay for it. Want to open that file cabinet or delete some of the copies? Too bad. Our mail is currently considered their property, and we have almost no control over how it gets used.

Could you imagine the outrage the public would experience if they found out that the postal service was holding their mail hostage and selling it to whoever was willing to pay? What’s happening with data on the Internet is no different, and it’s time this changes.

This is more than a matter of ethics; it’s a matter of basic human rights.

The problem with making the changes that need to be made (without those changes being forced into place by regulation) is attaching dollar figures to the issues. What is the financial return on a 20,000-hour engineering investment to improve consumer privacy standards? Are consumers demanding these changes? Because if it doesn’t produce a fiscal return and consumers aren’t demanding it, why should change be made? And even if they are and there is a return, what does 20,000 hours of investment even look like? What goes on the product roadmap, and when? These are all valid concerns that need to be addressed in order to help us move forward effectively. So, let’s discuss.

Recommended reading: Using Ethics In Web Design

Do Consumers Want It?

The answer to this question is a hard yes. Findings by Pew Research Center show that 90 percent of adults in the United States believe it is important that they have control over what information is collected about them, 93 percent believe it’s important they can control who has access to this information, and 86 percent have taken steps to remove or mask their digital footprints. Similar numbers were discovered about Europeans in doteveryone’s 2018 Digital Attitudes report. Despite these numbers, 59 percent still feel like it is impossible to remain anonymous online, 68 percent believe current laws do not do enough to protect their privacy, and only 6 percent are “very confident” that government agencies can keep them secure.

Now, I know what you’re thinking. This is consumer demand, and until those consumers start leaving old products behind, there’s no fiscal reason to make any change. And (although I don’t agree with your logic) you’re right. Right now there is little fiscal reason to make any change. However, when consumer demand reaches a critical mass, things always change. And the businesses that lead the way before the change is demanded always win in the long run. Those who refuse to make a change until they’re forced to always feel the most pain. History shows this to be the truth. But what’s going to happen in legislation that will change business so much? Great question.

What’s about to happen to data protection and privacy standards across the world, through regulation, will not be so different from what occurred some fifteen years ago when consumers demanded protection from spam emails, which resulted in the CAN-SPAM Act of 2003 in the United States — but on a much greater scale, and with exponentially greater impact. This legislation set the rules for commercial email, established requirements for commercial messages, gave recipients the right to have individuals and companies stop emailing them, and spelled out tough penalties for violations. As we enter a period where consumers are beginning to understand just how badly they’ve been deceived for years, giving people intimate control of their data will undoubtedly be the future of data collection — whether that comes through free will or legislation. And those who choose to move first will win. Don’t believe me?

Consider the fact that engineers can get in legal trouble for the code they write. Apple Watch, Alexa, and Fitbit data, among others, have been used as evidence in court, changing consumer perception of their data. Microsoft argued before the Supreme Court of the United States earlier this year over where physical borders extend in cloud-based criminal activity, the beginning of what will be a long fight. These examples are just a peek into what’s coming. The people are demanding more, and we’re reaching the tipping point.

The first to take steps to respond to this demand was the EU, which established the GDPR, and now policymakers in other countries are beginning to follow suit, working on laws to define our cyber future. For example, United States Senate Intelligence Committee Vice Chairman Mark Warner laid out some of his ideas in a summary report just a couple of months ago, demonstrating where legislation may soon be headed in the States. But it’s not just progressives who believe this is the future; even right-wing influencers like Steve Bannon think we need regulation.

What we’re seeing is a human reaction to incredible manipulation. No matter how domesticated we may be compared to previous generations, people will always push back when they feel threatened. It’s a natural reaction that has allowed us to survive for millennia. Today, tech has become more than just a consumer-facing industry; it is now also a matter of national security. For this reason, there will be a reaction whether we like it or not, and it will be better if we come up with a strategy to prepare rather than get swept up in the backlash. So, what’s the financial return, you ask? Well, how much is your business worth? That’s how much.

Recommended reading: How GDPR Will Change The Way You Develop

For a simple framework of what exactly needs to be addressed and why, we can hold several truths to be foundational in the creation of digital systems:

  1. Privacy must be proactive, not reactive, and must anticipate privacy issues before they reach the user.
    These are not issues we want to deal with after a problem comes to life; they are issues we want to prevent entirely, if possible.

  2. Privacy must be the default setting.
    There is no “best for business” option in regards to privacy; this is an issue that is about what’s best for the consumer, which, in the long run, will be better for the business. We can see what happens when coercive flaws are exposed to the public through what happened to PayPal and Venmo in August 2018 when Public by Default was released, bringing a smattering of bad press to the brand. More of this is sure to come to the businesses that wait for something bad to happen before making a change.

  3. Privacy must be positive sum and should avoid dichotomies.
    There is no binary relationship to be had with privacy; it is a forever malleable issue that needs constant oversight and perpetual iteration. Our work doesn’t end at the terms of service agreement; it lasts forever. Privacy should be considered a foundational element of your product, one that evolves with the product and enables consumers to protect themselves, not one that takes advantage of their lack of understanding.

  4. Privacy standards must be visible, transparent, open, documented and independently verifiable.
    There’s no great way to define a litmus test for your privacy standards, but a couple of questions we should all ask ourselves as business people are: First, if the press published your privacy agreement, would it be understandable? Second, if it were understandable, would consumers enjoy what they read? And last but not least, if not, what do you need to change?

These principles will be highly valuable foundations to keep in mind as products are built and evolve. They represent quick and easy questions to ask yourself and your team that will allow you to have a good baseline of ethics, but for a lengthier piece on legal foundations you can read more from Heather Burns, who outlined several additional principles last year on Smashing. And for a full list of things to inspect during a Privacy Impact Assessment (PIA), you can also check out how assessments are done according to:

But before rushing off to make changes in your product, first let’s point out some of the current flaws out in the wild and talk about what change might look like once they are implemented properly.

How To Make Change

One of the biggest problems with US privacy practices is how hard it is to understand terms of service agreements (T&S), which play a major role in defining privacy but tend to do so very poorly. Currently, users are forced to read long documents full of legal language and technical jargon if they hope to understand what they’re agreeing to. One study estimated that it would take approximately 201 hours (more than eight full days) per year for the average person to read every privacy policy they encounter. The researchers estimated the value of this lost time at nearly $781 billion per year, which is beyond unacceptable considering these are the rules that are supposed to protect consumers, rules that are touted to be easy and digestible. This puts consumers in a position where they’re forced to opt in without truly understanding what they’re getting into. And in many cases it’s not even the legal language that’s coercive; it’s the way options are presented in general, as various experiences clearly demonstrate:

When consent is collected this way, it is assumed.

The example given above is a generic wireframe, but I chose it because we’ve all seen patterns like it, including others related to collecting more specific types of data. I could list specific examples, but the list would go on forever, and there’s no reason to single out companies when these manipulative patterns (and others very similar to them) can be found on nearly every website and app on the Internet. There’s one major problem with asking for consent this way: consumers aren’t allowed to decline the terms of service without several extra steps, lots of reading, and often much more. This is a fundamental flaw that needs to be addressed, because asking for consent means there needs to be an option to say no, and in order to know whether “no” is the best option, consumers need to understand what they’re consenting to. However, products aren’t built that way. Why? Well, it’s best for business.

If we really sit and think about this, what’s easy to see, yet often goes unrecognized, is that companies spend more time creating splash pages to explain how to use the app than they do explaining what data is being collected and why. Simple changes to the way T&S agreements are presented would not only make consumers more aware of what they’re signing up for, but also allow them to be more responsible consumers. We can see some of these changes already being made thanks to the impact the GDPR has had across the world. In many European nations, it is now common for consent to be asked through modals like these:

In this image, the benefits of giving consent have been recognized, but there is still room for improvement.

This first example is a good step forward. It tells the consumer what their data will be used for, but it still lacks transparency about where the data will be going, and it gives priority to the agreement without offering an option to decline. It also jams everything into a single body of text, which makes the information much less digestible.

A better example of how this might be designed is something like the modal below, which is now common among many European sites:

After GDPR compliance became an issue, many more options were given, but improvements could still be made.

This gives consumers a comprehensive understanding of what their data will be used for and does it in a digestible manner. However, it still lacks any significant information about where the data will be going after they consent. There’s not a single clue as to where their data will be shared, who it will be shared with, and what limitations exist within those agreements. While this is much better than the majority of options on the web, there are still improvements to be made.

Third-Party Login Prompt

For example, when using a third-party service to log into your platform, consumers should be made well aware of the following:

  1. What data is going to be taken from the third-party;
  2. What it’s being used for and how it might affect their experience if you don’t have access to it;
  3. Who else has or might have access to it.

To implement this in a way that gives the consumer control, this experience should also allow consumers to opt-in to individual parts of the collection, not be forced to agree to everything or nothing at all.

Forcing consumers to check off each point adds friction to the process, yes, but it also makes sure the content is digestible.

This would make the T&S digestible and allow consumers to opt into what they truly agree to, not what the company wants them to agree to. And to make sure it’s truly opt-in, the default should be set to opt-out. This would be a small change that would make a dramatic difference in the way consent is asked for. Today, most companies blanket this content in legal jargon to hide what they’re really interested in, but the days of asking for consent in this way are quickly coming to an end.
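To illustrate the idea in code, here is a minimal sketch (the class and scope names are hypothetical, not any real consent API) of a per-scope consent store where every scope defaults to opted out and nothing is granted unless the user explicitly says yes:

```javascript
// Hypothetical consent store: every scope starts opted OUT, and
// nothing is granted unless the user explicitly opts in.
class ConsentStore {
  constructor(scopes) {
    // e.g. scopes = ["email", "friend_list", "birthday"]
    this.grants = new Map(scopes.map((s) => [s, false])); // opt-out by default
  }
  optIn(scope) {
    if (!this.grants.has(scope)) throw new Error(`Unknown scope: ${scope}`);
    this.grants.set(scope, true);
  }
  optOut(scope) {
    if (!this.grants.has(scope)) throw new Error(`Unknown scope: ${scope}`);
    this.grants.set(scope, false);
  }
  isGranted(scope) {
    return this.grants.get(scope) === true; // absent or false means no consent
  }
}

const consent = new ConsentStore(["email", "friend_list", "birthday"]);
consent.optIn("email"); // the user ticked exactly one box
console.log(consent.isGranted("email"));       // true
console.log(consent.isGranted("friend_list")); // false: never assumed
```

The design choice that matters here is the default: consent is never inferred from silence, and each scope is granted or revoked independently.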

If you’re providing consumers with a meaningful service, and doing so ethically, these changes shouldn’t be an issue. If there is a true value to the service, consumers are not going to resist your ask. They just want to know who they can and cannot trust, and this is one simple step that can help your business prove its trustworthiness.

Single- And Multi-Point Data Collection Requests

Next, when it comes to creating understandable T&S agreements for your platform, we have to consider how this might play out more contextually, within the application experience. Keep in mind that if it’s all given up front, it’s not digestible. For this reason, data collection requests should happen contextually, when the consumer is about to use a part of your service that requires an extra layer of data to be collected.

To demonstrate how this ask may occur, here are a couple of examples of what a single- and multi-point data collection request might look like:

Single- and multi-point data requests can be designed to reduce the complexity of current terms of service agreements.

Breaking the T&S down into digestible interaction points within the experience instead of asking the user for everything up front allows them to get a better understanding of what’s going on and why. If you don’t need the data to improve the experience, why is it being collected? And if it’s being collected for frivolous reasons that only benefit the company, then be honest. That’s just basic honesty, which unfortunately is considered revolutionary, progressive customer service in the modern world.
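A minimal sketch of this just-in-time pattern (the function names and feature are hypothetical) shows the two properties that matter: the ask happens only at the moment a feature needs the data, and a “no” degrades the experience gracefully instead of blocking the user:

```javascript
// Hypothetical just-in-time consent gate: the prompt fires only when a
// feature actually needs the data, and declining still lets the feature
// work in a degraded form instead of locking the user out.
const granted = new Set();

function requestDataPoint(dataPoint, purpose, promptUser) {
  if (granted.has(dataPoint)) return true; // already consented earlier
  // Contextual, single-purpose ask: the user sees what and why.
  const ok = promptUser(`Share ${dataPoint}? It will be used to ${purpose}.`);
  if (ok) granted.add(dataPoint);
  return ok;
}

function showNearbyEvents(promptUser) {
  if (requestDataPoint("location", "show events near you", promptUser)) {
    return "events near your location";
  }
  return "popular events everywhere"; // graceful fallback, no coercion
}

console.log(showNearbyEvents(() => false)); // "popular events everywhere"
console.log(showNearbyEvents(() => true));  // "events near your location"
```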

The biggest key to these initial asks is that none of this should be opt-in by default. All initial triggers should give the people using the tool the choice to opt in, or to use it without opting in, as they prefer. The days of forced opt-in (or, worse yet, coercive opt-in) are coming to an abrupt halt, and those who lead the way will stay ahead of the pack for a long time to come.

Data Control Center

Beyond asking for consent in a meaningful way, it will also be important that we give consumers the ability to control their data after the fact. Consumers’ control over their data should not end at the terms of service agreement. Somewhere in their account controls, there should also be a place (or places) where consumers can control their data on the platform after they’ve invested time with the service. This area should show them what data is being collected, who it’s being shared with, how they can remove it, and much more.

While we can often download our data now, we generally have no, or very little, control over it. This needs to change.

The idea of full data control may seem incredibly liberal, but it is no doubt the future. As the property of the consumer who creates it, data control should be considered a basic human right. There’s no reason why this should be a debate at this point in history. Data represents the story of our lives, collectively, and combined it confers vast amounts of power over those who create it, especially if we allow the systems to remain black boxes. So, beyond giving consumers access to their data, as we’ve discussed in the previous sections, we’ll also need to make the experience more understandable so that consumers can defend themselves.
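As a rough sketch of the backing model for such a control center (all names, record counts, and partner names here are invented for illustration), the key operations are listing what was collected, showing who it was shared with, and deleting a category outright:

```javascript
// Hypothetical "data control center" backing model: the user can see what
// was collected, who it was shared with, and delete any category outright.
const userData = {
  location: { records: 412, sharedWith: ["AdPartnerA", "AnalyticsB"] },
  contacts: { records: 87, sharedWith: [] },
};

function listCollected() {
  return Object.keys(userData);
}

function sharedWith(category) {
  return userData[category] ? userData[category].sharedWith : [];
}

function deleteCategory(category) {
  // In a real system this would also need to propagate deletion to
  // every partner the data was shared with.
  delete userData[category];
}

console.log(listCollected());        // ["location", "contacts"]
console.log(sharedWith("location")); // ["AdPartnerA", "AnalyticsB"]
deleteCategory("location");
console.log(listCollected());        // ["contacts"]
```

The hard part in practice is not this interface but the propagation hinted at in the comment: deletion only means something if it reaches every downstream copy.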

Create Explainable AI

While it is incredible to get a suggested result that shows us things we want before we even knew we wanted them, this also puts machines in a powerful position they are not yet ready to uphold alone. When machines are positioned as experts and perform at a level intelligent enough to pass as such, the public will generally trust them until they fail. However, if machines fail in ways the public is incapable of understanding, they will retain their expert status despite the failure, which is one of the greatest threats to humanity.

For example, if someone were to use a visual search tool to distinguish an edible mushroom from a poisonous one, and the machine mistakenly told them a poisonous mushroom was safe, that person could die. Or what happens when a machine determines the outcome of a court case and isn’t required to provide an explanation for its decision? Or worse yet, what about when these technologies are used for military purposes and are given the right to use lethal force? That last situation might sound extreme, but it is an issue currently being debated within the United Nations.

To ensure the public is capable of understanding what’s happening behind the scenes we need to create what DARPA calls explainable artificial intelligence (XAI) — tools that explain how machines make their decisions and the accuracy with which these tasks have been achieved. This isn’t about giving trade secrets away but allowing consumers to feel like they can trust these machines and defend themselves if an error were to occur.
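To make the idea concrete, here is a toy sketch of an “explainable” classifier for the mushroom example above. The weights, feature names, and threshold are entirely invented, and this is not any real XAI framework; the point is only the shape of the output: a verdict accompanied by per-feature contributions and an overall confidence, so a failure can be inspected rather than blindly trusted:

```javascript
// Toy explainable classifier: alongside its verdict, it reports each
// feature's contribution and an overall confidence. Weights are invented.
const weights = { capRed: 2.1, whiteSpots: 1.7, ringOnStem: 0.9 };

function classifyMushroom(features) {
  const contributions = {};
  let score = 0;
  for (const [name, weight] of Object.entries(weights)) {
    contributions[name] = (features[name] ? 1 : 0) * weight;
    score += contributions[name];
  }
  // Confidence here is simply the fraction of the maximum possible score.
  const confidence = score / (2.1 + 1.7 + 0.9);
  return {
    verdict: score >= 2.0 ? "possibly poisonous" : "no strong signal",
    confidence: Number(confidence.toFixed(2)),
    contributions, // the "why": which features drove the decision
  };
}

const result = classifyMushroom({ capRed: true, whiteSpots: true, ringOnStem: false });
console.log(result.verdict);       // "possibly poisonous"
console.log(result.contributions); // { capRed: 2.1, whiteSpots: 1.7, ringOnStem: 0 }
```

Real models are far more complex, but the contract is the same: every answer ships with the evidence behind it.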

Although it is not based on artificial intelligence, a good example of what this might look like is Credit Karma, which allows people to better understand their credit score, a system that used to be hidden just like algorithms are today. This tool gives consumers a better understanding of what’s happening behind the scenes and lets them dispute their results if they believe the system has failed. Similar tools are being created with systems like Google’s Match score on Maps and Netflix’s Percent Match on shows, but these systems are just beginning to scratch the surface of explainable AI.

Here we see systems that attempt to explain the machine’s decision on a very superficial level. This is a good start, but we need better.

Despite these efforts, most algorithms today dictate our experience based on what a company thinks we want. But consumers should no longer be invisibly controlled by large, publicly traded corporations. Consumers should have the right to control their own algorithm. This could be something as simple as letting them know what variables are used for what parts of the experience and how changing the weights of each variable will impact their experience, then giving them the ability to tweak that until it fits their needs — including turning the algorithm off completely, if that’s what they prefer. Whether this would be a paid feature or a free feature is still up for debate, but what is not debatable is whether this freedom should be offered.

Algorithm controls will be the future of business. Could this be a way to generate service revenue instead of relying solely on ads? Should it be free?

While the example above is a generic proposal, it begins to imagine how we might make the experience in more specific situations. By giving consumers the ability to understand their data, the way it’s being used, and how that affects their lives, we will have designed a system that puts consumers in control of their own freedom.
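As a hedged sketch of what user-controlled algorithms could mean in practice (the variable names, weights, and posts below are purely illustrative), a user-tunable feed ranking needs exactly the affordances described above: visible variables, adjustable weights, and an off switch that falls back to plain chronological order:

```javascript
// Sketch of a user-tunable feed ranking: the user can see the variables,
// change their weights, or switch the algorithm off entirely (falling
// back to chronological order). All names and numbers are illustrative.
let userWeights = { recency: 1.0, popularity: 1.0, similarity: 1.0 };
let algorithmEnabled = true;

function setWeight(name, value) {
  if (!(name in userWeights)) throw new Error(`Unknown variable: ${name}`);
  userWeights[name] = value;
}

function rankFeed(posts) {
  if (!algorithmEnabled) {
    // Algorithm off: newest first, nothing inferred about the user.
    return [...posts].sort((a, b) => b.recency - a.recency);
  }
  const score = (p) =>
    p.recency * userWeights.recency +
    p.popularity * userWeights.popularity +
    p.similarity * userWeights.similarity;
  return [...posts].sort((a, b) => score(b) - score(a));
}

const posts = [
  { id: "new",   recency: 0.9, popularity: 0.1, similarity: 0.2 },
  { id: "viral", recency: 0.2, popularity: 0.9, similarity: 0.8 },
];
setWeight("popularity", 0); // the user mutes popularity entirely
setWeight("similarity", 0);
console.log(rankFeed(posts)[0].id); // "new": only recency counts now
algorithmEnabled = false;
console.log(rankFeed(posts)[0].id); // "new": chronological fallback
```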

However, no matter how well these changes are made, we must also realize that giving people better control of their privacy does not automatically imply a safer environment for consumers. In fact, it may make things worse. Studies have shown that giving people better control of their data actually makes it more likely that they’ll provide more sensitive information. And if the consumer is unaware of how that data may be used (even if they know where it’s being shared), this puts them in harm’s way. In this sense, giving consumers better control of their data and expecting it to make the Internet safer is like putting a nutrition label on a Snickers and expecting it to make the candy bar less fattening. It won’t, and people are still going to eat it.

While I do believe that consumers have a fundamental right to better privacy controls and greater transparency, I also believe it is our job, as data-literate technologists, not only to build better systems but also to help the public understand Internet safety. So, the last step in bringing this together is to raise awareness of the fact that control isn’t all consumers need. They also need to understand what is happening on the back end, and why. This doesn’t necessarily mean providing them with source code or giving away intellectual property, but it does mean providing enough information to understand what’s going on at a base level, as a matter of safety. In order to achieve this, we’ll need to push beyond our screens. We’ll need to extend our work into our communities and help create that future.

Recommended reading: Designing Ethics: Shifting Ethical Understanding In Design

Incentivize Change

Giving up privacy is something the population has been corralled into due to the monopolies that exist in the tech world, consumers’ misunderstanding of why this is so dangerous, and a lack of tactical solutions associated with fiscal returns. However, this is a problem that needs to be solved. As Barack Obama noted in his administration’s summary of concerns about Internet privacy:

“One thing should be clear: Even though we live in a world in which we share personal information more freely than in the past, we must reject the conclusion that privacy is an outmoded value. It has been at the heart of our democracy from its inception, and we need it now more than ever.”

Creating trustworthy and secure data-sharing experiences will be one of the biggest challenges our world will face in the coming decades.

We can look at how Facebook’s stock dropped 19 percent in one day after announcing a re-focus on privacy efforts as proof of how difficult making these changes may be. Investors who have been focused on short-term revenue growth know how badly companies need to implement better strategies, but they also realize the cost involved if the public starts to question a business, and Facebook’s public statement admitting this startled the sheep.

While the process will not be easy (and at many times may be painful), we all know that privacy is the soft underbelly of tech, and it’s time to change that. The decisions being made today will pay off big in the long run, a stark contrast to the short-term, quarterly mindset that has come to dominate business over the past decade or so of growth. Thus, discovering creative ways to make these issues a priority for all stakeholders should be considered essential for businesses and policymakers alike, which means our job as technologists needs to extend beyond the boardroom.

For example, a great way to incentivize these changes beyond discussing the numbers and issues brought up in this article would be through tax breaks for companies that allocate large amounts of their budget to improving their systems. Breaks could be given to companies that decide to supply regular training or workshops for their staff to help make privacy and security a priority in the company culture. They could be given to companies that hire professional hackers to find loopholes in their systems before attacks occur. They could be given to those who allocate large amounts of hours to restructuring their business practices in a way that benefits consumers. In this sense, such incentives would not be so different from the tax breaks given to businesses that implement eco-friendly practices.

The idea of tax breaks may sound outrageous to some, but incentives like these would be a more proactive solution than the way things are handled now. While it may feel good to read a headline stating “Google fined a record $5 billion by the EU for Android antitrust violations,” we must keep in mind that fines like this represent only a small fraction of such companies’ revenue. Combine this with the fact that most cases take years, or even decades, to conclude, and that percentage only gets smaller. With this in mind, the idea of tax breaks can be approached from a different perspective: they are not about rewarding previously negligent behavior but about increasing public safety in a way that is in the best interest of everyone involved. Maintaining our current system, which allows companies to string out court cases while they continue their malpractices, is just as dangerous as having no laws at all, if not more so.


This article is the beginning of a series I will be writing dedicated to Internet safety, in which I will work to put fiscal numbers to ethical design patterns so that we, as technologists, can change the businesses we’re building and create a better culture around the development of Internet-connected experiences.







Nested Links

The other day I posted an image, quite literally as a thought exercise, about how you might accomplish “nested” links. That is, a big container that is linked to one URL that contains a smaller container or text link inside of it that goes to another URL. That’s harder than it might seem at first glance. The main reason being that…

<!-- this is invalid and won't render as expected -->
<a href="#one">
  <a href="#two">

Eric Meyer once called for more flexible linking, but even that doesn’t quite handle a situation where one link is nested inside another.

Here’s what happens with that HTML, by the way:

The nested link gets kicked out.

My first inclination would be to simply not nest the links in the markup, but make them appear nested visually. Some folks replied to the tweet, including Nathan Smith, who shared that same thought: have a relatively positioned parent element and absolutely position both links. The larger one could fill up the whole area and the smaller one could sit on top of it.

See the Pen “Nested” links by Nathan Smith (@nathansmith) on CodePen.

It’s finicky, as you’ll need magic numbers to some degree to handle the spacing and variable content.

My second inclination would be to deal with it in JavaScript.

<div onclick="location = '#one';" style="cursor: pointer;">
  <a href="#two">Nested link</a>
</div>

I have literally no idea how kosher that is from an accessibility perspective. It looks gross to me so I’m just going to assume it’s bad news.

Speaking of accessibility, Heydon Pickering has a whole article about card components which is a popular design pattern where this situation often comes up. His solution is to have a relatively positioned parent element, then a normally-placed and functional main link. That first link has an absolutely positioned pseudo-element on it covering the entire card. Any sub-links are relatively positioned and come after the initial link, so they’d sit on top of the first link by way of z-index.

Demo with author link.

And speaking of stacking pseudos, that’s the approach Sean Curtis uses here:

See the Pen Pretend nested links by Sean Curtis (@seancurtis) on CodePen.

Other solutions in the “crafty” territory might be:

Sara Soueidan responded with her own post!

I had the same requirement a couple of years ago when I was building the front-end foundation for Smashing Magazine. So I thought I’d write my response to Chris’s thread out in the form of a blog post.

Sara has written about this with much more detail and care than I have here, so definitely check that out. It looks like both she and Heydon have landed on nearly the same solution, with the pseudo-element cover that contains sub-links poking above it as needed.

Have you ever done it another way? Plenty of UX and a11y to think about!

The post Nested Links appeared first on CSS-Tricks.


Mailchimp Unveils Quirky Rebrand

If any single tech company embodies the spirit of web-savvy, then it is Mailchimp. Since its beginnings as a side-project in the early 2000s, the marketing service has walked the line between creative experiences and simple usability. Mailchimp is one of those companies that saunters onto the court, lobs a shot over its shoulder, and gets nothing but net.

Now, with their latest rebrand courtesy of brand agency Collins (as ever, alongside an in-house team) Mailchimp has got almost everything right. Almost.

Chimp lovers will be relieved to discover that Freddie has survived the rebrand, and remains as the logomark, albeit redrawn in a simpler form. He’s lost his “M”, a bit of fur’s gone, the ear’s simpler. Essentially Freddie is more usable, more translatable, more international.

The most visually arresting element of the rebrand is the new brand color. Yellow is tough to design with, but it’s by far the most satisfying color when it’s got right, which in this case it is. It is used to tie the whole identity together in a way that wouldn’t work with anything less bold.

The most interesting—not necessarily in a good way—decision has been to abandon Jessica Hische’s much-loved redrawing of the original Mailchimp script. It’s been replaced with an oddly proportioned, retro-feel sans that lacks rhythm, and the syllables of which are crowbarred apart by an obnoxious “c”; strange given that the brand is keen to deemphasise that letter—it’s “Mailchimp” now, not “MailChimp”. There’s a half-baked explanation offered about the script’s incompatibility with the Freddie logomark. Initially I hated the new logotype; an hour later, I loved it; now I’m back to hating it again. The logotype seems destined to divide opinion, but at least it isn’t a geometric sans-serif.

Coupled with this logotype Mailchimp has adopted Cooper Light as its corporate typeface, giving everything a distinctly 1970s feel.

It’s not really any surprise that Mailchimp have labored to retain their quirky edge; it is, after all, what made them stand out (they have “chimp” in their name!). What might come as a surprise is just how quirky Mailchimp have gone, particularly with their illustrations, which lie somewhere between Dr Seuss, and Quentin Blake, by way of Tove Jansson. The black and white illustrations with a strategic touch of brand yellow are sourced from illustrators around the world. (Although individual illustrations haven’t been attributed, several appear to be in the distinctive hand of Amber Vittoria.)

Mailchimp have also introduced a brand photography style that is easy to overlook amidst the joyful illustration. The photo examples themselves are well-taken, but their inclusion feels superfluous.

The rebrand is mostly excellent. The quirkiness is courageous and fitting. The color choice is striking. The type is debatable. The photography is questionable. But the whole is nothing if not fun. The biggest success is that despite growth—over 1 billion emails per day, 14,000 new users daily, $525m annual revenue—Mailchimp hasn’t lost sight of what made it a tool we wanted to use in the first place.





Build a Simple REST API with Node and OAuth 2.0

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

JavaScript is used everywhere on the web – nearly every web page will include at least some JavaScript, and even if it doesn’t, your browser probably has some sort of extension that injects bits of JavaScript code on to the page anyway. It’s hard to avoid in 2018.

JavaScript can also be used outside the context of a browser, for anything from hosting a web server to controlling an RC car or running a full-fledged operating system. Sometimes you want a couple of servers to talk to each other, whether on a local network or over the internet.

Today, I’ll show you how to create a REST API using Node.js, and secure it with OAuth 2.0 to prevent unwarranted requests. REST APIs are all over the web, but without the proper tools they can require a ton of boilerplate code. I’ll show you how to use a couple of amazing tools that make it all a breeze, including Okta to implement the Client Credentials Flow, which securely connects two machines together without the context of a user.

Build Your Node Server

Setting up a web server in Node is quite simple using the Express JavaScript library. Make a new folder that will contain your server.

$ mkdir rest-api

Node uses a package.json to manage dependencies and define your project. To create one, use npm init, which will ask you some questions to help you initialize the project. For now, you can use Standard JS to enforce a coding style, and use it as the test command.

$ cd rest-api

$ npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (rest-api)
version: (1.0.0)
description: A parts catalog
entry point: (index.js)
test command: standard
git repository:
license: (ISC)
About to write to /Users/Braden/code/rest-api/package.json:

{
  "name": "rest-api",
  "version": "1.0.0",
  "description": "A parts catalog",
  "main": "index.js",
  "scripts": {
    "test": "standard"
  },
  "author": "",
  "license": "ISC"
}

Is this OK? (yes)

The default entry point is index.js, so you should create a new file by that name. The following code will get you a really basic server that doesn’t really do anything but listens on port 3000 by default.


const express = require('express')
const bodyParser = require('body-parser')
const { promisify } = require('util')

const app = express()

const startServer = async () => {
  const port = process.env.SERVER_PORT || 3000
  await promisify(app.listen).bind(app)(port)
  console.log(`Listening on port ${port}`)
}

startServer()


The promisify function from util takes a function that expects a callback and returns a version of it that returns a Promise instead, which is now the standard way of handling asynchronous code. This also lets us use the relatively new async/await syntax and make our code look much prettier.

In order for this to work, you need to install the dependencies that you require at the top of the file. Add them using npm install. This will automatically save some metadata to your package.json file and install them locally in a node_modules folder.

Note: You should never commit node_modules to source control because it tends to become bloated quickly, and the package-lock.json file will keep track of the exact versions you used, so that if you install this on another machine you get the same code.

$ npm install express@4.16.3 util@0.11.0

For some quick linting, install standard as a dev dependency, then run it to make sure your code is up to par.

$ npm install --save-dev standard@11.0.1
$ npm test

> rest-api@1.0.0 test /Users/bmk/code/okta/apps/rest-api
> standard

If all is well, you shouldn’t see any output past the > standard line. If there’s an error, it might look like this:

$ npm test

> rest-api@1.0.0 test /Users/bmk/code/okta/apps/rest-api
> standard

standard: Use JavaScript Standard Style (
standard: Run `standard --fix` to automatically fix some problems.
  /Users/Braden/code/rest-api/index.js:3:7: Expected consistent spacing
  /Users/Braden/code/rest-api/index.js:3:18: Unexpected trailing comma.
  /Users/Braden/code/rest-api/index.js:3:18: A space is required after ','.
  /Users/Braden/code/rest-api/index.js:3:38: Extra semicolon.
npm ERR! Test failed.  See above for more details.

Now that your code is ready and you have installed your dependencies, you can run your server with node . (the . says to look at the current directory, and then checks your package.json file to see that the main file to use in this directory is index.js):

$ node .

Listening on port 3000

To test that it’s working, you can use the curl command. There are no endpoints yet, so Express will return an error:

$ curl localhost:3000 -i
HTTP/1.1 404 Not Found
X-Powered-By: Express
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
Content-Type: text/html; charset=utf-8
Content-Length: 139
Date: Thu, 16 Aug 2018 01:34:53 GMT
Connection: keep-alive

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /</pre>
</body>
</html>

Even though it says it’s an error, that’s good. You haven’t set up any endpoints yet, so the only thing for Express to return is a 404 error. If your server wasn’t running at all, you’d get an error like this:

$ curl localhost:3000 -i
curl: (7) Failed to connect to localhost port 3000: Connection refused

Build Your REST API with Express, Sequelize, and Epilogue

Now that you have a working Express server, you can add a REST API. This is actually much simpler than you might think. The easiest way I’ve seen is by using Sequelize to define your database schema, and Epilogue to create some REST API endpoints with near-zero boilerplate.

You’ll need to add those dependencies to your project. Sequelize also needs to know how to communicate with the database. For now, use SQLite as it will get us up and running quickly.

npm install sequelize@4.38.0 epilogue@0.7.1 sqlite3@4.0.2

Create a new file database.js with the following code. I’ll explain each part in more detail below.


const Sequelize = require('sequelize')
const epilogue = require('epilogue')

const database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite',
  operatorsAliases: false
})

const Part = database.define('parts', {
  partNumber: Sequelize.STRING,
  modelNumber: Sequelize.STRING,
  name: Sequelize.STRING,
  description: Sequelize.TEXT
})

const initializeDatabase = async (app) => {
  epilogue.initialize({ app, sequelize: database })

  epilogue.resource({
    model: Part,
    endpoints: ['/parts', '/parts/:id']
  })

  await database.sync()
}

module.exports = initializeDatabase

Now you just need to import that file into your main app and run the initialization function. Make the following additions to your index.js file.


@@ -2,10 +2,14 @@ const express = require('express')
 const bodyParser = require('body-parser')
 const { promisify } = require('util')
 
+const initializeDatabase = require('./database')
+
 const app = express()
+app.use(bodyParser.json())
 
 const startServer = async () => {
+  await initializeDatabase(app)
   const port = process.env.SERVER_PORT || 3000
   await promisify(app.listen).bind(app)(port)
   console.log(`Listening on port ${port}`)
 }

You can now test for syntax errors and run the app if everything seems good:

$ npm test && node .

> rest-api@1.0.0 test /Users/bmk/code/okta/apps/rest-api
> standard

Executing (default): CREATE TABLE IF NOT EXISTS `parts` (`id` INTEGER PRIMARY KEY AUTOINCREMENT, `partNumber` VARCHAR(255), `modelNu
mber` VARCHAR(255), `name` VARCHAR(255), `description` TEXT, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);
Executing (default): PRAGMA INDEX_LIST(`parts`)
Listening on port 3000

In another terminal, you can test that this is actually working (to format the JSON response I use a json CLI, installed globally using npm install --global json):

$ curl localhost:3000/parts
[]

$ curl localhost:3000/parts -X POST -d '{
  "partNumber": "abc-123",
  "modelNumber": "xyz-789",
  "name": "Alphabet Soup",
  "description": "Soup with letters and numbers in it"
}' -H 'content-type: application/json' -s0 | json
{
  "id": 1,
  "partNumber": "abc-123",
  "modelNumber": "xyz-789",
  "name": "Alphabet Soup",
  "description": "Soup with letters and numbers in it",
  "updatedAt": "2018-08-16T02:22:09.446Z",
  "createdAt": "2018-08-16T02:22:09.446Z"
}

$ curl localhost:3000/parts -s0 | json
[
  {
    "id": 1,
    "partNumber": "abc-123",
    "modelNumber": "xyz-789",
    "name": "Alphabet Soup",
    "description": "Soup with letters and numbers in it",
    "createdAt": "2018-08-16T02:22:09.446Z",
    "updatedAt": "2018-08-16T02:22:09.446Z"
  }
]

What’s Going On Here?

Feel free to skip this section if you followed along with all that, but I did promise an explanation.

The Sequelize function creates a database. This is where you configure details, such as what dialect of SQL to use. For now, use SQLite to get up and running quickly.

const database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite',
  operatorsAliases: false
})

Once you’ve created the database, you can define the schema for it using database.define for each table. Create a table called parts with a few useful fields to keep track of parts. By default, Sequelize also automatically creates and updates id, createdAt, and updatedAt fields when you create or update a row.

const Part = database.define('parts', {
  partNumber: Sequelize.STRING,
  modelNumber: Sequelize.STRING,
  name: Sequelize.STRING,
  description: Sequelize.TEXT
})

Epilogue requires access to your Express app in order to add endpoints. However, app is defined in another file. One way to deal with this is to export a function that takes the app and does something with it. In the other file, when you import this script, you run it like initializeDatabase(app).

Epilogue needs to initialize with both the app and the database. You then define which REST endpoints you would like to use. The resource function will include endpoints for the GET, POST, PUT, and DELETE verbs, mostly automagically.

To actually create the database, you need to run database.sync(), which returns a Promise. You’ll want to wait until it’s finished before starting your server.

The module.exports command says that the initializeDatabase function can be imported from another file.

const initializeDatabase = async (app) => {
  epilogue.initialize({ app, sequelize: database })

  epilogue.resource({
    model: Part,
    endpoints: ['/parts', '/parts/:id']
  })

  await database.sync()
}

module.exports = initializeDatabase

Secure Your Node + Express REST API with OAuth 2.0

Now that you have a REST API up and running, imagine you’d like a specific application to use this from a remote location. If you host this on the internet as is, then anybody can add, modify, or remove parts at their will.

To avoid this, you can use the OAuth 2.0 Client Credentials Flow. This is a way of letting two servers communicate with each other, without the context of a user. The two servers must agree ahead of time to use a third-party authorization server. Assume there are two servers, A and B, and an authorization server. Server A is hosting the REST API, and Server B would like to access the API.

  • Server B sends a secret key to the authorization server to prove who they are and asks for a temporary token.
  • Server B then consumes the REST API as usual but sends the token along with the request.
  • Server A asks the authorization server for some metadata that can be used to verify tokens.
  • Server A verifies Server B’s request.
    • If it’s valid, a successful response is sent and Server B is happy.
    • If the token is invalid, an error message is sent instead, and no sensitive information is leaked.
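As a rough sketch, the first step of that flow (Server B asking for a token) could look something like this; the client ID, secret, and token URL below are hypothetical placeholders, not values from this tutorial:

```javascript
// Sketch of step one: Server B exchanges its credentials for a token.
// The ID, secret, and token URL are hypothetical placeholders.
const clientId = 'server-b-client-id'
const clientSecret = 'server-b-client-secret'

// OAuth 2.0 client credentials are sent as an HTTP Basic authorization header.
const basicAuth = Buffer.from(`${clientId}:${clientSecret}`).toString('base64')

const tokenRequest = {
  url: 'https://your-auth-server/v1/token', // placeholder token endpoint
  method: 'POST',
  headers: {
    authorization: `Basic ${basicAuth}`,
    'content-type': 'application/x-www-form-urlencoded'
  },
  body: 'grant_type=client_credentials&scope=parts_manager'
}

// Server B would POST this request, receive { access_token, expires_in, ... },
// then send `Authorization: Bearer <access_token>` with each API call.
console.log(tokenRequest.method, tokenRequest.url)
```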

Create an Authorization Server

This is where Okta comes into play. Okta can act as an authorization server to allow you to secure your data. You’re probably asking yourself, “Why Okta?” Well, it’s pretty cool to build a REST app, but it’s even cooler to build a secure one. To achieve that, you’ll want to add authentication so users have to log in before viewing/modifying parts. At Okta, our goal is to make identity management a lot easier, more secure, and more scalable than what you’re used to. Okta is a cloud service that allows developers to create, edit, and securely store user accounts and user account data, and connect them with one or multiple applications.

If you don’t already have one, sign up for a forever-free developer account, and let’s get started!

After creating your account, log in to your developer console, navigate to API, then to the Authorization Servers tab. Click on the link to your default server.

From this Settings tab, copy the Issuer field. You’ll need to save this somewhere that your Node app can read. In your project, create a file named .env that looks like this:

ISSUER={your issuer URI}

The value for ISSUER should be the value from the Settings page’s Issuer URI field.

Highlighting the issuer URL.

Note: As a general rule, you should not store this .env file in source control. This allows multiple projects to use the same source code without needing a separate fork. It also makes sure that your secure information is not public (especially if you’re publishing your code as open source).

Next, navigate to the Scopes tab. Click the Add Scope button and create a scope for your REST API. You’ll need to give it a name (e.g. parts_manager) and you can give it a description if you like.

Add scope screenshot.

You should add the scope name to your .env file as well so your code can access it.

SCOPE=parts_manager

Now you need to create a client. Navigate to Applications, then click Add Application. Select Service, then click Next. Enter a name for your service, (e.g. Parts Manager), then click Done.

The post Build a Simple REST API with Node and OAuth 2.0 appeared first on SitePoint.


Representing Web Developers In The W3C


Rachel Andrew


One of the many things that I do is to be a part of the CSS Working Group as an Invited Expert. Invited Experts are people who the group wants to be part of the group, but who do not work for a member organization which would confer membership upon them. In this post, I explain a little bit about what I feel my role is in the Working Group, as a way to announce a possible change to my involvement, with the support of the Dutch organization Fronteers.

I’ve always seen my involvement in the CSS Working Group as a two-way thing. I ferry information from the Working Group to authors (folks who are web developers, designers, and people who use CSS for print or EPUB) and from authors to the Working Group. Once I understand a discussion that is happening around a specification which would benefit from author input, I can explain it to authors in a way that doesn’t require detailed knowledge of CSS specifications or browser internals.

This was the motivation behind all of the work I did to explain Grid Layout before it landed in browsers. It is work I continue, for example, my recent article here on Smashing Magazine on Grid Level 2 and subgrid. While I think that far more web developers are capable of understanding the specifications than they often give themselves credit for, I get that people have other priorities! If I can distill and share the most important points, then perhaps we can get more feedback into the group at a point when it can make a difference.

There is something I have discovered while constantly unpacking these subjects in articles and on stage. While I can directly ask people for their opinion — and sometimes I do — the answers to those direct questions are most often the obvious ones. People are put on the spot; they feel they should have an opinion and so give the first answer they think of. Even when they’re given an A or B choice about a subject (when asked to vote), they may not be in a place to fully consider all of the implications.

If I write or talk about a subject, however, I don’t get requests for CSS features. I get questions. Some of those I can answer, and I make a note to perhaps better explain that point in future. Some of those questions I cannot answer because CSS doesn’t yet have an answer. I am constantly searching for those unanswered questions, for that is where the future of CSS is. By being a web developer who also happens to work on CSS, I’m in a perfect place to have those conversations and to try to take them back with me to the Working Group when relevant things are being discussed and we need to know what authors think.

To do this sort of work, you need to be able to explain things well and to have a nerdy interest in specifications. I’m not the only person on the planet who has these attributes. However, to do this sort of work as an Invited Expert to the CSS Working Group requires something else; it requires you to give up a lot of your time and be able to spend a lot of your own money. There is no funding for Invited Experts. A W3C Invited Expert is a volunteer, attending weekly meetings, traveling for in-person meetings, spending time responding to issues on GitHub, chatting to authors, or even editing specifications and writing tests. This is all volunteer work. As an independent — sat at a CSS Working Group meeting — I know that while practically every other person sat around that table is being paid to be there — as they work for a browser vendor or another company with an interest — I’m not. You have to care very deeply, and have a very understanding family for that to be at all sustainable.

It is this practical point which makes it hard for there to be more people like me involved in this kind of work — in the way that I’m involved — as an independent voice for authors. To actually be paid to work on this stuff usually means becoming employed by a browser vendor, and while there is nothing wrong with that it changes the dynamic. I would then be Rachel Andrew from Microsoft/Google/Mozilla. Who would I be speaking for? Could I remain embedded in the web community if I was no longer a web developer myself? It’s for this reason I was very interested when representatives from Fronteers approached me earlier this year.

Fronteers are an amazing organization of Dutch web developers. One of my first international speaking engagements was to go to Amsterdam to speak at one of their meetups. I was immediately struck by the hugely knowledgeable community in Amsterdam. If I am invited to speak at a front-end event in the Netherlands, I know I can take my nerdiest and most detailed talks along with me; the community there will already know the basics and be excited to hear the details.

Anneke Sinnema (Chair of Fronteers) and Peter-Paul Koch (Founder) approached me with an idea they had about their organization becoming a Member of the W3C, which would then entitle them to representation within the W3C. They wanted to know if I would be interested in becoming their first representative — a move that would make me an official representative for the web development community, as well as give me a stipend so that I would have some paid hours covered to do that work. This plan needs to be voted on by Fronteers members, so it may or may not come to fruition. However, we all hope it will, and not just for me but as a possible start to a movement which sees more people like myself involved in the work of creating the web platform.

My post is one of a few being published today to announce this as an idea. For more information on the thoughts behind this idea, read “Web Developer Representation In W3C Is Here” on A List Apart. Dutch speakers can also find a post on the Fronteers blog.



Amazon Announces Alexa Presentation Language Preview and New Alexa Skills Kit SDK Frameworks for Java

Amazon has announced a preview of its new Alexa Presentation Language (APL) which allows developers to build visual Alexa skills that are voice-first and can be customized for different types of devices.
