Social Media Platform Fundamentals: Safety

Jacob Star
11 min read · Jun 12, 2021

Warning: In this series, I’m going to be butchering concepts in UX/UI and community management for the sake of presenting clear parameters to a general audience.
Further reading: Intro | Leadership | Structure | Growth

So you want to start a social media platform?

Great! So do I — but instead of spending too much time trying to learn yet another language or framework myself, I’m going to teach you some of the lessons I’ve tried to tackle in building simple prototypes of all kinds of websites.

This second entry is about how the shape of your platform affects the safety, security, and privacy of your users.

Social media as it stands has a well-discussed safety problem, driven by the public-first nature of sites like Twitter. The solutions in the interface have to be paired with serious consideration of how the core product is structured.

But please, don’t confuse this with articles about “Privacy UX” going around that are basically business-minded “tips and tricks” for compliance. Those only scratch the surface for what it means to be privacy-first.

The State of Things Today

I’ve talked about social media from a few interrelated angles in the past.

You may want to supplement this entry with “Communities are Molded by the Shape of their Platform”. It’s an anecdotal rant about how platforms and apps in the past have hurt people by ignoring their own limitations and failing to discourage practices that exploit them.

I’ll be talking about some of those mentioned platforms here in a more neutral tone, but do have a read to get my own opinions.

Needless to say, those opinions are not without merit, nor without commonality. I’m far from the only person to have shared them, and in fact, I’ve learned a lot from more successful people sharing their own experiences and more scholarly research on the subject.

One other thing you may find as I talk about this is that the word “platform” is pretty well conflated with terms like “app”, “service”, “community”, and even “software” in both common and industry talk, to the point that distinctions are quite meaningless in cases like this.

So while a traditional “social media” platform like Facebook, a chatroom service like Discord, a blog running on a CMS like WordPress, a job search site like Indeed, and even a fitness app on your watch are all very different in purpose, their potential to share something digital of yours with someone else is inherently common enough that they warrant shared inclusion.

Safety is, after all, part and parcel of allowing any means for one person to see something of another person, or to interact with them. Tom Scott calls it the “bitter ex test” — can someone with an agenda hurt you with information made open about you?

Your Most Vulnerable Users

Following from the setup in our introduction entry, the question of who your platform is for should be pretty clear: every platform should be for its most vulnerable users first, and its privileged users will benefit by proxy.

The question of defining “vulnerable” and “privileged” is broad and depends on the categorical topic, but in this case, we’re talking about people who are most prone to be targeted and persecuted in the existing community your platform is entering.

In general, you can expect people of color, LGBT people, neurodivergent people, disabled people, refugees, and victims of abuse to all have their own overlapping needs for safety. Their greatest needs are not necessarily exclusive to them as individual demographics, but listening to them first will make a huge difference in how your site is able to ward off toxicity and keep a good reputation longer.

This includes everything from simple accessibility to fully-developed policy enforcement.

Social media in the mainstream, however, is influenced by long-standing pillars of the tech industry, which allow both Silicon Valley and the FOSS community to co-exist. These include “free market” capitalist practices, but they also include a conflation between “free (gratis) software” and “free (libre) speech” that translates to “we prefer to have a public space where we can say whatever the f*** we want.”

The fundamental problem with the “free speech first” attitude is that by promising to open the floodgates, it forces a platform to take a lowest-common-denominator approach to safety at launch, which effectively shuts out any bargaining power these vulnerable demographics have from there. It lets the people with the least qualms about safety set the tone, which is how we get alt-right havens.

Now that you’re convinced that you need to listen, what might you be expected to hear?

Axes of Access

Every website is different, and social media platforms are no exception. But generally, you can sort them out by a few core traits that relate directly to user safety. Each trait isn’t a value judgement on its own, but you’ll do well to steer clear of a few deadly combinations.

To make the questions easier to understand, one technique I like to employ is a multi-dimensional graph. With each axis, each question is mapped to an approximate point between two extremes.

The more dimensions you have, of course, the harder the graph is to visualize in two dimensions, so the axes can also be expanded into a row of sliding scales.

Left: An empty three-dimensional graph with each axis color-coded. Right: Four sliding scales of access (from private to public), visibility (from invisible to opaque), prominence (from observer to peer to influencer), and data (from anonymous to meticulous).

One thing to keep in mind as well — if your platform allows users to create their own spaces within it, then any power you give them can shift their position on these scales. In some cases, these overriding changes can either add to the safety… or create a gaping hole.
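As a rough illustration of the scales above, each trait can be treated as a normalized slider between its two extremes. This is a hypothetical sketch — the names, ranges, and example values are my own, not any platform’s actual spec.

```python
from dataclasses import dataclass

@dataclass
class PlatformProfile:
    """Each trait is a slider from 0.0 to 1.0.

    access:     0.0 = fully private,  1.0 = fully public
    visibility: 0.0 = invisible,      1.0 = fully visible to peers
    prominence: 0.0 = observer, 0.5 = peer, 1.0 = influencer
    data:       0.0 = anonymous,      1.0 = meticulous collection
    """
    access: float
    visibility: float
    prominence: float
    data: float

# An imageboard-style platform: very public, peer-level, near-anonymous.
imageboard = PlatformProfile(access=0.95, visibility=0.9,
                             prominence=0.5, data=0.1)
```

A user-created space (say, a private group) would then be a second profile that overrides the platform-level one — which is exactly where those “gaping holes” can appear.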

Access to Content

The first question is whether your platform is open to the public, or walled off.

It’s not a strict binary, of course. Being able to access content is often done on a case-by-case basis, but platforms can generally have a predictable range.

Putting a gate right at the home page is one extreme measure, but this encompasses everything from a typical CMS user system, to a paywall for certain posts, to a user-level option to make some of their posts private.

However, there are some systems that have access control as a core feature. Discord is unique even among chatroom services. Access to a server centers around invite codes — effectively passwords — and control rests entirely in whoever hands out those codes. A server is made public by distributing a code to the public, and made private again by revoking it. The only people with guaranteed access are the server owner… and the Trust & Safety team.
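The invite-code model can be sketched in a few lines. This is a toy illustration of the concept, not Discord’s actual implementation — the class and method names here are hypothetical.

```python
import secrets

class Server:
    def __init__(self, owner):
        self.owner = owner
        self.members = {owner}
        self.invites = set()

    def create_invite(self):
        # An invite code is effectively a shared password.
        code = secrets.token_urlsafe(8)
        self.invites.add(code)
        return code

    def revoke(self, code):
        # Revoking makes the space private again for newcomers;
        # existing members keep their access.
        self.invites.discard(code)

    def join(self, user, code):
        # Access hinges entirely on holding a currently-valid code.
        if code in self.invites:
            self.members.add(user)
            return True
        return False
```

Note the asymmetry: revocation closes the door going forward but doesn’t evict anyone, which is why control rests with whoever hands the codes out.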

Contrast this with an imageboard, like 4chan. 4chan does not lock off any posts, and in fact does not provide any special access privileges beyond admin/”janitor” users and a simple CAPTCHA gate for posting. This isn’t the whole story with them, but this is probably the simplest commonly-known example of open access with interactivity.

Access: close to fully public. Prominence: a short range pivoted around Peer, in the center. Data: close to fully anonymous.
A representation of 4chan on three axes

Beyond that is the realm of static websites — what we’d now call Web 1.0 pages — and basic file system protocols like Git and FTP. That’s straying off-topic from social media, though.

User Influence

This question is more complicated — how much prominence does an individual user have on the site and on fellow users?

It’s complicated because the answer can be sliced up into as many layers as there are types of users on the platform. And as we’ll soon see, prominence is relative — one post can be both highly prominent in its own space, and less prominent within a larger curated list.

As a baseline, however, we’ll start with a first-time user — no account, no history, just curiosity.

A typical blog running WordPress might opt to use its built-in comment system. With that, all you need is a name, email address, and message to deface a blog post with your prose.

Filters and extra measures aside, comments are a simple, low-impact way to interact with someone. However, this creates two different tiers of influence — high influence from the poster, and low influence from the commenter.

Post author prominence: from peer to near-influencer, depending on the nature of the blog. Commenter prominence: from total observer to about halfway to peer, depending on comment settings on the post and blog.
Comparing a post’s author and its commenters on a typical WordPress post

The simple hierarchy made by the presence of an OP and its replies creates a whole dynamic with tons of nuances… before any of it is given context through medium, content, or further degrees of prominence. YouTube videos, forum threads, and tweets all have some degree of this, but they can also simultaneously represent examples of a peer dynamic.

“Peers” in this case means users able to post with equal prominence. Those same videos, threads, and tweets are prominent within their poster’s space, but on a browse or search page, their prominence is more equalized by necessity.

Of course, there’s no perfectly equal prominence in a platform, especially once content exceeds the space of a single page. The main ways of discovering YouTube videos — searching and being recommended — are mediated by an algorithm designed to sort videos by popularity and relevance to a watcher. The largest existential problems with YouTube center on faults in that algorithm as implemented.

On the other hand, a browsing user can control prominence simply by choosing to follow certain other users, allowing the site to display a more narrow and reasonable mix of content to their liking.
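The contrast between these two modes of prominence can be made concrete. Here’s a minimal sketch, with made-up post fields, comparing a follow-based feed (prominence set by the user) with a popularity-first ranking (prominence set by the platform):

```python
def followed_feed(posts, following):
    # Follow-based feed: prominence is set by the user's own choices,
    # shown in simple reverse-chronological order.
    return sorted((p for p in posts if p["author"] in following),
                  key=lambda p: p["posted_at"], reverse=True)

def algorithmic_feed(posts):
    # Popularity-first ranking: the platform decides prominence,
    # regardless of who the viewer actually wants to hear from.
    return sorted(posts, key=lambda p: p["views"], reverse=True)

posts = [
    {"author": "ana", "posted_at": 2, "views": 10},
    {"author": "ben", "posted_at": 1, "views": 99},
]
```

The same pool of posts yields very different feeds: the first surfaces only who you chose, the second surfaces whoever is already popular.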

However, those same existential problems will exist for any large social platform, and if money is on the line there as well, those issues will need to be addressed from day one.

Data Collection

If you’re posting on a website that isn’t your own static web host, then the site owner will want to collect at least some personal data from you.

It could be something as simple as an IP address, a unique identifier, and an email address. It could be as complicated as collecting your entire life history — vital records, private records, and every government-issued number to your name — platforms for medical, labor, and financial purposes may collect them for a core purpose you’ve opted into.

Job seeker visibility: from fully invisible to near-invisible. Job seeker data: small range pivoted around a quarter to fully meticulous.
Representing a typical job search site, from the perspective of a job seeker.

Ideally, of course, it’s in your best interest to only give it to sites you trust to keep it secure and narrowly used. However, it’s a fact that mainstream social media is kept afloat with the promise of data harvesting, and platforms like Facebook and LinkedIn all but encourage you to share seemingly-innocuous details like your location, contact info, and interests.

A balance may be hard to strike, but there are some barriers and concessions that have held sway over time.

For instance, a platform that pays its creators will have to collect tax and deposit info, and you have to trust that they’ll use it as minimally as possible. Your money and your 1099 will need to go somewhere.

On the other hand, the platform will also have to navigate rules set out by laws, including child privacy laws. The COPPA compromise made by YouTube opened up a whole debate over the best ways to run a website catering to both young and mature audiences. If you’re daring enough to try the same, the risks and penalties for misstepping are much higher, but many companies will say it’s worth it.

Finally, there’s all the ethical questions around data collection, and those are pervasive enough that we can address them in full later on.

Visibility

Putting it all together, the three questions meet up and are judged against this final consideration — are users actually able to notice and interact with each other?

As mentioned with data collection, solo-experience platforms exist, where individuals have limited interaction with a pool of entities. Job search sites, medical apps, online courses — they all restrict lateral visibility between peer users, and focus more on the viewer-poster relationship.

Further up the line, you get simple blog comments again, forum users, all the way up to the fully-decked-out profiles on Facebook.

Facebook actually has very complicated settings to control visibility of profiles and their details. However, they’re not very well promoted, and that leads into the next topic:

Opt-In Spaces

By joining a space owned by another user, you’re subject to the whims of that owner. But by running a space, you’re subject to the responsibility of maintaining the integrity of the platform within that space.

Logging into an account is an opt-in to the platform and its public spaces — a consent to tracking between the user and the site.

Joining a private space, similarly, is an opt-in to a further layer of access that is likely trusted. As in, you as the owner should trust that everyone who joins does so in good faith, and the platform may or may not have your back in all cases.

This is where the scales of privacy above come in — you want to make it clear to your core users about the intended use of your platform, warn them about dangerous practices that stretch its limits, and also disincentivize those practices where you’re less able to make up for them.

However, there are other opt-in situations that are less obvious, but they can affect a user’s experience even more personally. For instance — privacy settings.

Every setting that locks off part of a profile, or restricts consent to reply or direct message, or blocks a user or tag or phrase, is an opportunity for a user to curate and de-stress their daily content feed. If they are sensitive to certain topics and your platform includes a tagging system, that’s a benefit.
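Those settings amount to a per-user filter over the content feed. A minimal sketch of the idea, assuming a simple post structure with an author and tags (the field names are illustrative):

```python
def curate(feed, blocked_users=(), blocked_tags=()):
    """Apply a user's opt-in blocking settings to a feed.

    A post is hidden if its author is blocked, or if any of its
    tags intersect the user's blocked-tag list.
    """
    return [post for post in feed
            if post["author"] not in blocked_users
            and not (set(post.get("tags", ())) & set(blocked_tags))]

feed = [
    {"author": "ana", "tags": ["art"]},
    {"author": "ben", "tags": ["spoilers", "art"]},
]
```

The key design point is that the filter runs on the user’s terms: the blocked lists belong to the viewer, not the platform, which is what makes it curation rather than censorship.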

If you’re reading this and thinking: Hold up a second. Aren’t you encouraging an echo chamber?, then fear not.

True Curation

Another reason to bring up your most vulnerable users is that if your platform is built on content, then you have an opportunity to elevate new and otherwise unseen creators with a true curation system.

True curation. Not “smart” curation — i.e., algorithm-first — and not “perfect” curation.

Not every platform is going to be run explicitly to serve its own in-house content, like a streaming service. And that’s okay — other platforms with nobler goals are just as valid. But the larger it gets, and the more diverse its userbase gets, the more curation needs to be taken into consideration.

We’ll talk more about the long-term implications another time, but for now, consider the fact that user safety is often affected by how empowered they are to post content and share in the conversation. And that means for your core users, their work is spread fairly to the audience that deserves to see it.

Of course, part of the job of curation will entail addressing hate speech, bad-faith arguments, and harmful content of all kinds.

Being proactive about this flipside of curation may gain you some enemies who claim “censorship”, but if you’ve read this far and are totally on board, then you’re likely already past their whinging and eager to prove them wrong.

However, you also likely know that it will be an uphill battle to make sure curation and moderation are done effectively. Platforms in the past have risen and fallen with their reputations hinging on how well they were run.

If you want to take the next step, then stay tuned for the next part in our series: Leadership.
