Improve your bounty campaign efficiency

2018-03-13 21:31:42

Ditch the spreadsheet and use automated software

We’re no rookies when it comes to bounty marketing campaigns. Everyone in our team at Bethereum is familiar with how token sales are preceded by extensive marketing campaigns that are financed through the distribution of so-called bounty tokens. Most of us have even participated in several bounty campaigns, not just to receive free tokens, but also to learn more about the business and to examine current trends.

What we saw did not impress us much.

Firstly, it is worth mentioning that most bounty campaigns are surprisingly low-tech. The solution for many multi-million-dollar blockchain projects comes in the form of a shared Google Doc or online spreadsheet, either filled out manually or populated by simple scripts that record user activity.

Secondly, the low-tech nature of bounty campaigns creates a misguided incentive for users to engage in unsuitable marketing efforts. Over and over, we've seen bounty campaigns erupt in frenetic activity just days before the token sale. In such cases, the majority of bounties are unfairly distributed to individuals who use any means necessary in the span of a few days, while the consistency and loyalty of the rest go unrewarded.

Lastly, bounty campaigns are often biased in favor of programmers, developers and other IT professionals who know how to game the system. More casual users are dissuaded from bounty campaigns due to their complicated or unintuitive nature. While it's nice to get so much attention from people within our field of work, it is even better to captivate a broad audience, including more casual individuals who may be our future end-users.

With these thoughts in mind, we created a bold vision for a bounty marketing campaign that would break the conventional mould, programming a competitive leaderboard that would automatically register useful social media activity, rewarding points and ranking users accordingly. The final distribution of bounties will be based upon each user’s rank on the leaderboard on the eve of our token sale.

This solves the issue of inconsistency, as users are required to participate each day, with a daily cap limiting the number of points they can gain and thus reducing the incentive to post unnecessary spam.
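To make the mechanic concrete, here is a minimal sketch of daily-capped scoring. This is purely illustrative; Bethereum hasn't published its implementation, and the cap value and function names here are assumptions:

// Hypothetical sketch of daily-capped bounty scoring, not Bethereum's actual code.
const DAILY_CAP = 100; // assumed maximum points a user can earn per day

const scores = new Map(); // userId -> { total, earnedToday, day }

function awardPoints(userId, points, today) {
  const s = scores.get(userId) || { total: 0, earnedToday: 0, day: today };
  if (s.day !== today) { s.day = today; s.earnedToday = 0; } // new day, reset the cap
  const granted = Math.min(points, DAILY_CAP - s.earnedToday); // activity past the cap earns nothing
  s.earnedToday += granted;
  s.total += granted;
  scores.set(userId, s);
}

// Final distribution: rank users by total points on the eve of the token sale.
function leaderboard() {
  return [...scores.entries()].sort((a, b) => b[1].total - a[1].total);
}

Because excess activity past the cap earns nothing, the only way to climb the leaderboard is to show up consistently, day after day.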

By programming our own bounty distribution software, we have created a very simple and easy-to-use bounty program that everyone can participate in equally. You don’t have to be a crypto-geek to gain Bether tokens in our campaign — all we ask for is consistency and moderation.

We took a leap of faith and live-tested the campaign. How did it go? Well, the numbers speak for themselves. Over the first two days of our bounty campaign’s launch, we recorded:

  • +23,000 Telegram members
  • +12,000 Twitter followers
  • +400,000 Twitter impressions and top trending posts on all major crypto Twitter hashtags
  • +6,000 Facebook page likes
  • +25,000 sign-ups on our bounty website
  • +47,000 new visitors to our website
  • 10 million Bether token bounty pool already reached

In fact, we've had so much success that we had to cap the total number of participants, closing registrations once we reached 45,000, just a week after launching the campaign.

We hit practically every limit on Telegram, Facebook and Twitter. In addition to all this traffic, we were happy to see that all participants enjoyed the intuitive rules and interface:

(Screenshots of example tasks: Bitcointalk supporter, Facebook page review, Twitter comment.)

As a way of expressing gratitude to our bounty hunters, we added another 4.5 million tokens to be distributed among our top 6,000 participants. To make it more interesting, one lucky hunter ranked in the top 6,000 will win the grand prize of 500,000 Bether tokens!

Now here's the kicker: we achieved this with no marketing efforts on our part. Outside of allocating 14.5 million Bether tokens, we invested zero into advertising our bounty campaign. And still, people flocked to us in droves, which shows that bounty campaigns can be run far more effectively, with mutual benefits to us and our bounty hunters.

There's still much to be learned about making good bounty campaigns, and we hope that by leading by example with custom-made bounty software, we'll inspire other projects to ditch the spreadsheet and put some work in.



Is it a Value Object or an Entity?

2018-03-13 19:06:01

When things are not clear, you only need to ask the right questions.

So many Value Objects! Or are they Entities?

Imagine you enter a library to borrow a book.

What you would typically do is ask for the book by its title and author.

The library employee checks the inventory and finds two available physical copies of the book.

Can we say that those two copies represent the same thing? Yes and no.

For you as a library customer, whether the employee gives you one copy or the other makes no difference.

From your perspective, the physical copy is a Value Object.

But for a library employee, having two copies of the same book is a completely different story. She needs to know exactly when each copy was acquired, to whom it was lent, and on which shelf it is stored.

From her perspective, the single copy of the book is an Entity.

Every physical book copy acquired by the library is labeled with an entry number. The entry number is the property by which a library employee can uniquely identify a physical copy.

As a customer, do you care about the entry number? Probably not.

Keeping this insight in mind, the concept of a book can be modeled to reflect this double perspective:
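The code sample that followed didn't survive the export, but a minimal sketch of the double perspective, in JavaScript with illustrative class and property names, could look like this:

// Customer perspective: a book is a Value Object, identified purely by its properties.
class Book {
  constructor(title, author) {
    this.title = title;
    this.author = author;
  }
  equals(other) {
    // Two Books with the same properties are interchangeable.
    return this.title === other.title && this.author === other.author;
  }
}

// Employee perspective: a physical copy is an Entity, identified by its entry number.
class BookCopy {
  constructor(entryNumber, book) {
    this.entryNumber = entryNumber; // identity: survives any property change
    this.book = book;               // the Value Object it embodies
    this.lentTo = null;
    this.shelf = null;
  }
  equals(other) {
    // Two copies are the same only when their identities match.
    return this.entryNumber === other.entryNumber;
  }
}

The swapping trick described below falls out naturally: two Book values with equal properties are interchangeable, while two BookCopy entities are equal only when their entry numbers match.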

Value Object or Entity. How to choose?

Ask yourself the following questions:

  • Is there any difference if I swap two objects with the same properties?
  • In which part of the domain am I?
  • From which point of view am I building this part of the model?
  • In general, how many actors are looking at the object?
  • How many different perspectives can I find?

When you have only one use case, the swapping trick is an easy one.

You should apply it in advance in order to understand if something is a Value Object or an Entity.

When there are multiple perspectives, though, things get more nuanced and you must go deeper in the understanding of your domain.

In this case, the common error that you must avoid is to define a single class modeled as an Entity.

You can recognize that kind of class because it is stuffed with all possible behaviors. For every use case. Performed by every actor in the game.

Refrain from modeling something like that.

Doing otherwise would be like lying. You will later see that something feels wrong in your code. That something doesn’t belong where it is.

Instead, use the opportunity to gain insights about the multiplicity of your model.

Create as many classes as there are points of view on the object.

Prefer decoupled duplication to coupled normalization.

In other words, acknowledge the complexity of your domain and show it in all its glory inside your codebase.

Ready to become a better developer?

I’ve created a developer toolkit for you. Download it for free and start learning how to write code you will be proud of. If you constantly apply those techniques, you’ll become a top software developer.

Get the free developer toolkit here!

If you enjoyed this post, please click the 👏 button and share to help others find it! Feel free to leave a comment below.

You can also subscribe to my Software Development newsletter here:


Put your chatbot where your headless CMS is

2018-03-13 19:01:01

How to make a chatbot in Slack with Sanity, Webtask and Dialogflow

Make intents for chatbots and conversational UIs a part of your content management system

Heads up! You'll need some knowledge of JavaScript to follow this tutorial, but it may still be interesting for how we think about integrating chatbots into a CMS.

The idea of a headless content management system is to detach your content from the constraints of web pages, in order to reuse it in many contexts. This makes sense even when you only want to display your content on a webpage, because you can structure it in a way where content can be reused across many pages, and more easily switch your frontend code when something fancier comes around. The real power of headless, however, comes when you manage to reuse your content in different interfaces.

Chatbots have been part of the tech buzz for a while now, and it seems that demand is increasing while the tools and AI models become more refined. Google’s Dialogflow just launched support for my native language, Norwegian, which proved a good excuse for me to do some experimentation. I have been pondering for a while how we could implement chatbot-responses with the headless CMS we use at Netlife, which is Sanity (read more about why we chose Sanity here). I think I found a pattern that is easy to implement and maintain.

In sum, you’ll need to:

  1. Set up an agent in Dialogflow
  2. Make a custom app in Slack, and connect it to Dialogflow
  3. Add intent and fulfillment schemas in Sanity
  4. Connect Sanity and Dialogflow with a serverless service, in this case Webtask

I. Make an agent and an intent in Dialogflow

Once logged into Dialogflow, choose Create new agent, give it a name and choose appropriate settings (I chose the V2 API). In my case I wanted to make a chatbot that could connect our company's intranet with Slack. I named it after our Chief of Staff (a role which, granted, can never be fully automated).

Once you've made a new agent, go to Intents and choose the Create Intent button. You can think of an "intent" as "a certain thing that a user would want to do or have answered". My intent was to get an answer about how we in Netlife book travel. Give the intent a descriptive name; we'll use this name in Sanity to map the correct content. Fill out different training phrases, which are examples of what your users would write or say(!) in order to request said information. In this case, it's variations on "how do I book travel" and so on. Hopefully you won't need to enter many alternatives before Google's machine learning algorithms are able to route the user to this intent. You can test how well it works in the right-hand sidebar.

Set up an intent in Dialogflow and test it in the right-hand sidebar. Here I have set up alternatives in Norwegian for the question "how do I book travel".

You can write out the different possible answers for this intent in Dialogflow's Responses section, but where's the fun in that? Instead, turn on Enable webhook call for this intent. This will make Dialogflow post a request to whatever URL you put in the Fulfillment section. We will return to this when we set up our microservice in Webtask.

II. Make a custom app in Slack and connect it to Dialogflow

If you go to Integrations in the left sidebar in Dialogflow, you’ll discover that it can integrate with many different services. The setup will be pretty similar with most of them, but we want Slack. Follow the instructions in Settings in the Slack box closely. And by “closely” I mean that you should take your time to read the instructions and try to understand them.

Make sure that you give the Slack bot the necessary event subscriptions.

Your Slack bot will need both some authentication and event subscriptions in order to be able to read your queries in Slack. You could have it listen to all conversations, but I prefer it to only answer direct or @-mention messages. Partly because I don't want the bot to accidentally trigger mid-conversation, and partly because I don't want to feed Dialogflow every line of conversation in our Slack if there is not a very good reason to.

III. Add schemas for intents and fulfillments in Sanity

If you are not familiar with Sanity yet, go try it out and be back here in fifteen minutes. The content schemas (i.e. document types and input fields) in Sanity are written as JavaScript objects with some simple conventions. We're going to make a pretty simple setup by creating a type for intents and adding a content field for fulfillments in our intranet post type.

In our post type, where we write the articles for our intranet, I added an array field called fullfillments that consists of a simple string field. We could make this more complex in order to support messages for different clients; for example, we could have one for voice interfaces, one for Slack responses with attachments and one for Facebook Messenger templates. This time, we'll keep it simple and just have some simple text responses do the work.

The intent schema consists of a title that makes it easy to find in Sanity, an intentName that we'll use to map it to the intent in Dialogflow, and a reference field to the posts that contain the fulfillments for this intent. It makes sense to make intents their own type, because the fulfillments can live in different document types.

Don’t let the Norwegian throw you off.
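Since the schema screenshots don't carry over here, below is a minimal sketch of what the two schemas described above might look like. The field names follow the article; everything else is an assumption on my part, following Sanity's schema conventions:

// intent.js: its own document type, since fulfillments can live in different types
export default {
  name: 'intent',
  title: 'Intent',
  type: 'document',
  fields: [
    { name: 'title', title: 'Title', type: 'string' },
    { name: 'intentName', title: 'Intent name (must match Dialogflow)', type: 'string' },
    {
      name: 'posts',
      title: 'Posts containing the fulfillments',
      type: 'array',
      of: [{ type: 'reference', to: [{ type: 'post' }] }],
    },
  ],
};

// And in the post type, the fulfillments are just an array of strings:
// { name: 'fullfillments', title: 'Fullfillments', type: 'array', of: [{ type: 'string' }] }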

IV. Tie it all together with Webtask

Now that we've set up Sanity with an intent and a fulfillment, we're ready to connect it to Dialogflow. This is a case where serverless functions come in handy. I went with Webtask because I had used it before and it has an online editor, but you could easily replicate this with AWS Lambda, stdlib, Google Cloud Functions or any server(less) technology to your taste. In any case, the core of the function will be very similar to this gist.
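The embedded gist didn't survive the export, so here is a rough sketch of the shape such a function could take. This is my reconstruction rather than the author's gist; the project ID, the GROQ query and the field names are assumptions based on the schema above:

// Webtask-style handler: Dialogflow webhook in, Sanity fulfillment out.
const sanityClient = require('@sanity/client');

const client = sanityClient({
  projectId: 'your-project-id', // placeholder
  dataset: 'production',
  useCdn: true,
});

module.exports = function (context, callback) {
  // Dialogflow V2 sends the matched intent's display name in queryResult
  // (assuming Webtask is set up to parse the JSON body).
  const intentName = context.body.queryResult.intent.displayName;

  // Assumed GROQ query: find the intent document by intentName and pull the
  // referenced post's fullfillments array (field names as in the schema above).
  const query = '*[_type == "intent" && intentName == $intentName][0]{"fullfillments": posts[0]->fullfillments}';

  client
    .fetch(query, { intentName })
    .then(result => {
      const texts = (result && result.fullfillments) || ['Sorry, I have no answer for that yet.'];
      // Pick a random fulfillment so the bot's answers vary a little.
      const fulfillmentText = texts[Math.floor(Math.random() * texts.length)];
      callback(null, { fulfillmentText }); // the response shape Dialogflow V2 expects
    })
    .catch(callback);
};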

Example of the query at work, using the Vision plugin for Sanity.

This is proof-of-concept code and could benefit from more error handling. Webtask isn't too keen on ES6 syntax either. Pay attention to how we locate the correct intent in Sanity: this is why it's important to align the intent name in Dialogflow with the intentName in Sanity. I've also chosen to output the fulfillment strings randomly, just to make some variation possible.

If you managed to piece all this together, you should now have a working chatbot in Slack that parses natural language for intents with machine learning in Dialogflow and finds the fulfillment texts from your headless CMS via a serverless function that talks to both APIs. Now you only need to add Blockchain somewhere in this mix to tick off all the buzzword boxes. Further on, we could also connect intents in Dialogflow directly to Sanity via APIs and so on. There are many ways to advance this.

The Slack chatbot at work: Me: “I want to order travel”. Chatbot: “You order travel through our travel agency”. Amazing technology for mundane uses.

This is of course the technical side of the design project. Now the real work begins: researching what your coworkers actually might want to ask the bot, and designing useful answers wrapped in a personality they actually want to engage with. I'd recommend picking up Conversational Design by Erika Hall for starters.

If you try this, or have some comments on my setup, I’d love to have your insights and questions in the comment section!


Wireless 2.0 — Integrated Networks on the Blockchain.

2018-03-13 18:56:01

How Orbis implements Wireless mesh network technology

Wireless mesh networks, an emerging technology, may bring the dream of a seamlessly connected world into reality.

You would be forgiven for thinking that wireless mesh networking is just another marketing bullet point for new Wi-Fi routers, a phrase coined to drive up prices without delivering benefits.

But we can avoid being cynical for once: mesh technology does deliver a significant benefit over the regular old Wi-Fi routers we’ve bought in years past and that remain on the market.

Mesh networks are resilient, self-configuring, and efficient. You rarely need to mess with them after the often minimal setup work, and they arguably provide the highest throughput you can achieve in your home. These advantages have led several startups and existing companies to introduce mesh systems contending for the home and small-business Wi-Fi networking dollar.

Mesh networks solve a particular problem: covering a relatively large area, more than about 1,000 square feet on a single floor, or a multi-floor dwelling or office, especially where there’s no Ethernet already present to allow easier wired connections of non-mesh Wi-Fi routers and wireless access points.

All the current mesh ecosystems also offer simplicity.

You might pull out great tufts of hair working with the web-based administration control panels on even the most popular conventional Wi-Fi routers.

In outdoor wireless networking, wireless mesh networks are the third topology, after point-to-point and point-to-multipoint, for building a wireless network infrastructure. Each device in a wireless mesh network is typically called a mesh node and is connected with multiple other mesh nodes at the same time.

Wireless mesh networks are also multi-hop networks, because each mesh node can reach another node by going through multiple hops, leveraging other nodes as repeaters. The major advantage of a wireless mesh network is its intrinsic redundancy and, consequently, reliability, because a mesh network is able to reroute traffic through multiple paths to cope with link failures, interference, power failures or network device failures.

Point-to-point, Point-to-Multipoint and Mesh Networks

Two types of wireless mesh networks are usually implemented for commercial and government applications:

  • Unstructured or omni-directional wireless mesh networks
  • Structured wireless mesh networks

In an unstructured wireless mesh network, each mesh node typically uses an omni-directional antenna and is able to communicate with all the other mesh nodes that are within the transmission range. Wireless links in an unstructured wireless mesh network are not planned and link availability is not always guaranteed.

Depending on the density of the mesh network, there may be many different links available to other mesh nodes or none at all. Unstructured mesh networks are usually implemented with non-line-of-sight (NLOS) radios using low-frequency, low-bandwidth radios operating, for example, in the UHF bands, such as 400 MHz, or in the license-free band at 900 MHz.

Unstructured wireless mesh networks leverage one single channel shared by all the radios. Therefore, the higher the number of hops a transmission requires, the lower the overall throughput of the network will be.

Structured wireless mesh networks are planned networks typically implemented using multiple radios at each node location and multiple directional antennas. A ring topology using multiple directional wireless links is commonly used in a structured wireless mesh network to enable each radio to seamlessly reroute traffic through different paths in the event of node or link failures.

Structured wireless mesh networks are often used for mission-critical applications such as wireless video surveillance, public safety, and industrial automation.

They provide the ideal network architecture when a site requires a highly reliable and available wireless network for a broadband application such as video, voice and data streaming. Each link in a structured wireless mesh network operates on an independent channel and, therefore, the number of hops for a specific transmission does not affect the overall throughput of the network.
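As a back-of-envelope illustration of the two designs above (a deliberately simplified model that ignores MAC overhead, interference and scheduling):

// Unstructured mesh: every hop retransmits on one shared channel, so the
// medium is busy once per hop and end-to-end throughput divides by hop count.
function unstructuredThroughput(linkRateMbps, hops) {
  return linkRateMbps / hops;
}

// Structured mesh: each link has its own channel, so hops can forward
// concurrently and throughput stays near the single-link rate.
function structuredThroughput(linkRateMbps) {
  return linkRateMbps;
}

console.log(unstructuredThroughput(54, 3)); // 18 Mbps after three hops
console.log(structuredThroughput(54));      // 54 Mbps regardless of hops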

Wireless mesh networks have been studied in academia since the early '90s, initially mainly with military applications in mind, and started to gain significant commercial traction between 2005 and 2010.

Mesh networks in the world.

Bluetooth technology, the global standard for simple, secure wireless connectivity, now supports mesh networking. The new mesh capability enables many-to-many (m:m) device communications and is optimized for creating large-scale device networks.

It is ideally suited for building automation, sensor networks, and other IoT solutions where tens, hundreds, or thousands of devices need to reliably and securely communicate with one another.

According to networking expert John Shepler, in the near future the Wi-Fi card in your laptop might become an access point in addition to its normal role as network client. In a full mesh topology, every node communicates with every other node, not just back and forth to a central router. In another variation, called a partial mesh network, nodes communicate with all nearby nodes, but not distant nodes. All communications are between the clients and the access point servers. The client/server relationship is the basis for this technology.

The pros and cons

Bluetooth mesh networking has several advantages and applications this decade. Some of them are listed here:

  • Wireless connections between laptops/PCs and smart devices
  • Wireless connections between peripherals (mouse, keyboard, etc.), devices (radios, audio speakers, displays, etc.) and mobile phones
  • Transfer of files, images, and MP3s between mobile phones
  • Data logging equipment that transmits data to a computer
  • Smart homes
  • Commercial IoT applications
  • Smart manufacturing
  • Peer networking

But these applications are defined within certain ranges. For example, we cannot transfer data or files between devices located a mile apart. By creating a powerful Bluetooth mesh network and making it globally accessible, we could utilize it in a variety of applications, including messaging.

And hence, individual consumers and developers remain locked out of this lucrative market.

And that’s where Orbis comes into the picture.

What is Orbis?

The Orbis platform aims to establish a platform for both consumer and commercial developments in Bluetooth mesh by establishing pre-existing infrastructure and network for developers to deploy onto. — Jason Chao, Orbis CEO.

Orbis creates multi-purpose and flexible infrastructure for developers to build upon and consumers to utilize, delivering this through three components: OrbiStore, OrbisWeb, and OrbisToken (OBT). Orbis has applications in IoT development, crowd-sourced networking, and systems integrations.

How does Orbis Work?

Let's take a simple application: messaging. Messaging, however, only stands in for the general transmission of data over mesh networks.

After downloading the OrbisWeb app for your iOS or Android device, click "connect" and your phone becomes part of a Bluetooth network, a node in OrbisWeb.

Then you can proceed to download an app, perhaps a Bluetooth messaging app, from the OrbiStore.

Using the app, your message is broadcast to all nodes in range, which in turn relay it to other nodes. This repeats until your recipient has received your message.

And all the while, as your phone relays others' data, your wallet is credited with OrbisTokens simply for being part of the network; these can then be used to purchase paid apps in the OrbiStore.

The value of such decentralized messaging in the face of seemingly universal SMS service and Wi-Fi is that mesh networks are not susceptible to infrastructure damage such as in the wake of natural disasters nor do they require costly connectivity implementations in low-connectivity places such as subway tunnels, underpasses, or even rural areas.

Analyzing Orbis

Using the metrics I follow, which have been highlighted in —

How to make Money by Trading and Investing in Cryptocurrency

1. Great Team:

The development and progress of a project rely on its team members. Everything from the architectural overview on a piece of paper to how, and how quickly, it can be brought to market tends to depend on the team and advisors. Orbis has a team of very young members who aim to take the project from idea to partnerships with actual companies.

2. Real World Usage:

  • OrbisWeb Android app alpha version available
  • No noticeable competitors
  • Partnered with Ritech Technology (Shenzhen) Co. Ltd.


3. Future ideology

OrbisWeb delivers secure, global, decentralized, and open networks of Bluetooth communities that anyone can participate in. OrbisWeb aims to make Bluetooth mesh development easily accessible to third-party developers with implementations such as IoT, crowd-gathered data, digital infrastructure, logistics, and systems management. For consumers, Orbis apps will bring unique functionalities unattainable with conventional WiFi based apps.

OrbiStore is an application platform open to third-party development, whether utilizing BLE network infrastructure or not, and developer income will be supplemented with OBT.

Orbis Token (OBT)
Developers are minted new coins based on app usage, up to a coin cap, and consumers are minted coins for being active mobile nodes running the OrbisWeb mobile app. OBT is used to purchase paid apps on the OrbiStore and to purchase products.

I am pretty stoked for this one. I am VERY bullish on Chinese Blockchain projects, and the NEO ecosystem provides a high energy, high reward platform for growth and usage once mainstream adoption begins. That being said, this token hit home for me and I have participated in the Orbis Airdrop.


Orbis delivers secure, global, decentralized, and open networks of Bluetooth communities that anyone can participate in. Orbis aims to make Bluetooth mesh development easily accessible to third-party developers with implementations such as IoT, crowd-gathered data, digital infrastructure, logistics, and systems management. For consumers, Orbis apps will bring unique functionalities unattainable with conventional WiFi based apps.

What this means is that you could use Orbis for:

  1. IoT Industry

Smart homes, crowdsourced data, and automation. Control your lights, shades, and HVAC, manage your supply chain, or see live wait times with the Orbis app.

2. BTC Transactions

Use the Orbis mesh network to send and receive cryptocurrencies while offline. Funds are held in escrow until matching transaction data is uploaded.

3. Defense

Mesh provides reliable offline communication in undeveloped landscapes. Mesh can coordinate soldiers and systems in monitoring and managing the battlefield.

4. Telecommunications

Internet sharing and mesh messaging. Sell your mobile data for OBT and message others via mesh. A robust, cheap, and portable solution for disaster relief.

5. Advertising

Sponsored locations and foot traffic data. Be rewarded in OBT for visiting and connecting to a node in a sponsored location!

Quite interestingly, Orbis also recently announced their airdrop of 1,000,000 tokens to a maximum of 60,000 NEO addresses.

Simple math tells you it is around 16.7 OBT tokens per person.

If an exchange were to list OBT at $0.70, that's a little more than 11 dollars.

Enough for a free lunch :)

Jokes aside, this token does seem like an interesting hold, and it has a strong set of fundamentals IMHO.

I advocate investing in ecosystem tokens and fundamentally strong tokens.

Get your OBT tokens here.

Cheers and Thank you for reading.


Clap 1 time or 50 times. It helps me gain exposure. Thank you!


Articulating my thoughts from over the years and super stoked to write about Blockchain, trading, cryptocurrency and life.

I aim to bring Cryptocurrencies to the masses in a well refined, easy to understand manner. Being complicated helps none and neither does the biased media.

Yes, I think the system is a massive lie and it is about time to change that.




Disclaimer: This is not a sponsored post, and while I have signed up for the Orbis Airdrop, I am not affiliated with Orbis in any way.


This Week’s Top Ten Tech Stories

2018-03-13 18:46:30

Online Classes from Skillshare to Build Your Dream Career. First 100 Hacker Noon readers get the first 2 months for just $.99!


I’m in Austin for the first time. Looking forward to their 24 Hour Hackathon starting today. But the internet moves on! More people keep writing great stories! And I’ve curated IMHO the best of the best :-) You can find all this month’s top stories in the March archive, but without further ado…

Here are the top ten tech stories of the week:

How Chris Messina Works, and What's the Future of AMAs? Chris is internet famous for inventing the hashtag, and his new company, Molly, is more than MDMA's namesake. It was great to learn more about how he sees the internet and why he does the work he does. Thanks Chris Messina for sharing your tech wisdom.

Picking the low hanging passwords by David Gilbertson. According to a not-at-all recent report by Keeper, there’s a 50/50 chance that any user account can be accessed with one of the 25 most common passwords. And there’s a 17% chance that the password is 123456. This strikes me as absolute rubbish, but it got me thinking, if I want to get unfettered access to some user accounts, and I don’t really care which accounts, rather than using ‘brute force’ by trying many passwords for one user account, it makes much more sense to flip that and try one password on many user accounts.

Read more Software development stories.

The Biggest Content Website You Never Heard Of: And Here’s Why by Rishi Sachdeva. In the entire journey, including the strategy and execution of our business model, we didn’t realise one critical shortcoming: it all relied on a platform that we didn’t own, a platform that was too powerful to let anything thrive on it for free — By virtue of dealing directly with Facebook influencers/pages, we were essentially cutting the platform itself off from the equation.

Read more Social Media stories.

The 4 Layers of Single Page Applications You Need to Know by Daniel Dughila. Every successful project needs a clear architecture, which is understood by all team members.

Read more React stories.

Starsky Robotics Drove a Fully-Driverless Truck (and raised $16.5m from Shasta Ventures) by Stefan Seltz-Axmacher. Starsky Robotics is the only autonomous truck team with a product. I’m thrilled to announce that we drove a truck 7 miles fully unmanned. No safety driver behind the wheel, no engineer hiding on the bunk. We are the first company to make driverless trucks reality. Watch our GoPro footage.

Read more venture capital and self driving vehicle stories.

Blockchain Oracles Will Make Smart Contracts Fly by Doug von Kohorn. At a very high level, using an oracle means receiving data from outside of a blockchain. Said another way, an oracle provides a connection between real world events and a blockchain. In my opinion, all of the really interesting complex smart contracts require outside information — financial derivatives, gambling, stablecoins, identity…literally anything where you want to incorporate something happening in the real world.

Read more about Smart Contracts.

🔥 JavaScript Modules Worth Using 🔥 by Travis Fischer. A quick breakdown of the most useful JavaScript modules that I find myself using over and over again. This is an opinionated article that focuses on general-purpose modules and utilities that I’ve found invaluable to NodeJS and frontend JavaScript development. It won’t be exhaustive or include any special-purpose modules, as those types of awesome lists are indeed awesome but tend to be a bit overwhelming.

Read more Javascript stories.

You Helped Us Raise $2600 for SF Marin Food Bank — $13,000 worth of food by Micha Benoliel. With each download we donated $1 to the SF Marin Food Bank. That $1 allows the Food Bank to provide $5 worth of food due to their ability to purchase in bulk. The $2600 raised could provide $13,000 worth of food thanks to SF Marin food bank or it could mean more much needed storage space. Debbie Bullish, Community Engagement Manager from SF Marin Food Bank, tells us: “Currently the warehouse comes in at an impressive 55,000 square feet. It’s built to hold 28m lbs / year, however we deliver 48m lbs/year, which is equivalent to 100,000 meals/day. We simply don’t have the capacity to run at full throttle and that is a grave shame considering the impact this charity is making.”

Read more Tech for Good and marketing stories.

How to tell a story in the blockchain world? by Mohit Mamoria. Blockchain has allowed, for the first time in human history, strangers to collaborate without first having to trust each other. And the implications it is creating in industries other than finance are astonishing. This shift is huge! Who would have thought that the journey the human species embarked upon thousands of years ago by painting marks on cave walls would lead to a day where the future of the entire species revolves around telling stories.

Read more Blockchain stories.

An Open Letter to Banks about Bitcoin and Cryptocurrencies by Peter McCormack. Dear Mr Bank Manager, This is not an easy letter for me to write. I have been a customer of yours for over 20 years. You were there with a loan for me when I bought my first car; you helped arrange the mortgage when I bought my first house, and you even helped me launch my first business. We have been through so much together. And I'll let you in on a little secret: you were my first! Don't worry, I know I wasn't yours. I think this is why this relationship means so much more to me than you.

Read more Open Letters.

And ICYMI From Around the Web

Cryptocurrencies: Last Week Tonight with John Oliver (HBO). This 20 minute video — while a bit introductory — is funny as shit and hits on some of those iconic moments in the crypto movement, like comparing hacking the blockchain to putting a chicken back together from chicken nuggets and this guy’s irrational (and oddly motivational) optimism for BitConnect.

Until next time, don’t take the realities of the world for granted.

Kind Regards,

David Smooke

P.S. Support our sponsor! Over 4M members & 18k+ classes…Skillshare is basically Netflix for online learning. Learn everything from tech and design to marketing, photography, & more. First 100 Hacker Noon readers get the first 2 months for just $.99!


Corporate Man’s Search for Meaning

2018-03-13 18:11:32

Our hero awakes to the sound of tyrannical electronic bleeps and the crushing certainty of another day of wasted potential and squandered dreams. It’s a cruel sting of consciousness that can be soothed only by the escape of a fifteen-minute snooze.

Getting out of bed is somehow an act of both great bravery and extreme cowardice. He is a dutiful slave, a good little boy with just enough mental strength to suppress his rage, numb his emotions and sustain this ‘normal’ existence.

Another night of digital submersion had slipped by, our hero flooding his dopamine receptors until the early hours as he waded in shallow pixelated waters. The bleeps had awoken him two, perhaps three, hours earlier than is healthy from a deep slumber, his only respite from the misery of being. This was not a noble awakening — as might have happened in a simpler, more recognizably brutal time — but a pathetic and demeaning one; a man rising to the meagre challenge of travelling to another building, sitting in a chair and staring at a screen. Somewhere, in the recesses of his mind, he knew this.

Eyes sore and body aching, our hero commences his morning routine. In the bathroom he is confronted by his disheveled image and becomes briefly absorbed in self-examination. Staring back at an expression of deep-rooted apathy and quiet desperation, he contemplates how his ten-year-old self ended up here. He then switches off his thoughts, the defense mechanism of a tortured spirit. Shower, brush teeth, dry off, dress, drink juice, eat toast — each task is executed with a robotic efficiency honed over years of practice.

On the subway platform, he observes the many and different faces of those he will be journeying with, searching for signs of pain, reassurance that he is not alone in his grim interpretation of the situation. He thinks he sees it in some, but cannot be sure. Others are almost totally inscrutable, faintly exuding only the horrifying impression that, yes, they have somehow become psychologically adapted to this mode of being. Today, like all days, he will find no solace in his attempts at telepathy with his enigmatic fellow man, only more questions about the nature of suffering, free will and the seemingly intentional triumph of modern society in achieving our irreversible atomization.

The train rolls in and our hero — brushing off thoughts that he might someday jump — jostles for standing space on board. Still no eye contact, no words, just the implicit understanding that if everyone stays quiet and acts amicably, the desolate absurdity will pass over everyone with a disaffected painlessness. Most distract themselves with smartphones, necks craned and fingers tapping. Others stare into space, looking terrifyingly contented or otherwise stoic. He is surrounded by people, but excruciatingly lonely.

Our hero scours his depleted mind for the reserves of resolve and patience he will need to once again navigate his involuntary participation in office politics, feign interest in his soul-sapping work and suppress his masculine nature. He wonders if there will ever be an escape from this muffled agony and drudgery, and where it might come from, imagining some abstract, unknowable future place where he has been liberated from the sorrow of this farcical day being repeatedly played out.

Taking his seat and turning on his computer, our hero greets his colleagues with a veneer of friendliness. Like the subway passengers, they too are unfathomable, some even exhibiting the impression of being filled with optimism in anticipation of eight long hours of screen-staring, mouse-moving and keyboard-tapping. This again disturbs our hero, who contemplates this time whether it is in fact he who is broken, needlessly pessimistic, suffering without cause.

By now in an existential pit of despair in which the thought of his meaningless work is too painful to even consider, he turns instead to the welcoming grip of hot tea and the web, anesthetizing the pain with Silicon Valley-engineered digital dopamine rushes like a wounded soldier being pumped with morphine. He finds relief in the bitterest, darkest corners of the internet. At least here he gets to feel something — some outrage over the latest big news story, or some fleeting connection with an anonymous passer-by.

After a couple of hours of blissful avoidance, a trivial delegation from his superior — many times over his intellectual inferior yet through vacuous ‘network building’ and brown-nosing reaching far higher up the corporate-social chain — brings him sharply, mercilessly back to reality. He retunes his frayed senses to the atmosphere of the room. The depravity of the piercing fluorescent lights, the detached hum of the air conditioning, the bland, solid whiteness of the furnishings — it all seems to have been designed with the intention of invoking these feelings of vacant obedience. This cubicle is a physical representation of our hero’s psyche — penned in, blank, colourless.

Make-Work Task #1 is executed with a defeated, disengaged acceptance. In more hopeful times he resisted — asked questions — but over the years came to see how the corporate structure is built; an artificial pyramid of psychopaths at the top, dangerously incompetent but socially-adept managers in the middle, and silenced, superficially categorized, and deliberately divided drones at the bottom.

It’s a scheme given the illusion of efficiency by a constant stream of assignments that bring no value or happiness to anyone, just more administration, more lies, and more suffering. He came to understand how this system worked — where the evil lay. It revealed itself in the language of the corrupted — in the extra-terrestrial corpspeak engineered to mask the shallowness of their personalities and the limitations of their usefulness. He has seen it evolve, new terms introduced and old ones falling out of favour.

Our hero checks his corporate email inbox; twelve unread. “Join us for the sixth annual…” Delete. “Organizational Announcement”. Delete. “Globocorp News, March 13”. Delete. “Thought you guys might find this usef…” Delete. “Celebrate diversity and inclu…” Delete. And so on, until he gets to three emails from his superior, Make-Work Tasks #2, #3 and #4. More manifestations of the corporate bullshit machine running as intended.

By now the undemanding 'work', stagnant air and visual sterility have deadened his senses and made him drowsy. He heads towards the lunchroom for another hour of grim self-reflection. It's the extreme boredom, the repetition, the never-ending silence that makes him want to scream. Our hero struggles to contain the pent-up energy tearing at the seams of his skin.

Then it’s back for round two, the ‘home stretch’, for what will inevitably be another three hours struck off in digital escape and perhaps the completion of Make-Work Tasks #5 and #6, if he can bring himself to face them.

Bear witness to the concentration camp of a spiritually-dead civilization. The ‘final solution’ to prop up a long-failed economic model has been to construct a system based almost entirely on the impression of work and the suppression of individualism. The elites had taken a calculated gamble that the majority would fall for it — even be thankful for it. And they did. Unlike Auschwitz, there’s food and comfort in abundance, but this too is a place devoid of ambition, joy and hope. Sanity is all there is to be salvaged.

Our hero wonders if this is in fact the endpoint of the techno-corporate welfare system, such is the incongruity of what are supposed to be profit-generating organizations paying hordes of people to do work that has no effect on their bottom line. This is universal basic income, corporations and governments having become so intertwined and so little in need of the excess masses that they were required to create this elaborate ruse. Unthinking people were sent to cubicles thinking they had jobs to do, only to be sedated with the internet and beaten down with ideological uniformity and enforced mediocrity in exchange for the promise of luxury-laden survival. Questions of dignity, achievement and purpose had long been abandoned in favour of domestication, mass-scale control, and eternal amusement.

These are the thoughts he has every day, inescapable and undiscussable with those in the cubicles around him. He yearns to know if any of them feel the same way. Among his greatest fears is their total and universal lobotomization, for he knows this cannot happen to him. Sometimes he wishes it could, that they had a way.

Why does he stay in this office? Why doesn’t he quit?

He should be out there using his hands, working with other men, or using his head, making things, creating. Society’s idealization of the corporate career meant this dawned on him too late, and so he finds himself trapped in this strange, dead, alien place. He aches to learn philosophy, read great literature, explore the history of mankind, acquire survival skills and test his body’s endurance against the sun’s energizing rays. Have these urges been totally suppressed in his colleagues, all curiosity and motivation drained out of them? Is this enough, to belong to this faceless organization of the sitting dead? Were they made for mere submission and servitude? Were they born docile?

Our hero sports a sizable paunch and an arched back severely misshapen by years of motionless sitting. Endless, interminable sitting. Sitting waiting for something that never comes. There will be no rebellion, he accepts, as he looks around at his comrades tip-tapping away at their keyboards, gawking eyes glued to their screens. Every so often an office alpha male strides through to assert his dominance, breaking up the hushed tones of the gossiping secretaries. Sometimes there are new faces, but it's always the same tired pop culture conversation, the same passive-aggressive power plays, the same pointless meetings, and the same tedious small talk.

Everything here is clinically sanitized to ensure nothing is felt, that nothing is expressed; that all interactions are safe and predictable. Out of sight, but undoubtedly present, is the all-powerful HR-middle-management complex, guided by rigid doctrines dictating what can and cannot be said and done — deciding what, in fact, must be said and done to ‘fit in’. They are the social engineers defining ‘company values’, the rules that must not be broken, the red tape that must not be cut, snuffing out any possibility of unbridled creativity, wrongthink or competitive spirit.

There are no threats, no struggles and no victories. Just logging in, shutting up and zoning out.

It’s crossed our hero’s mind that if he could somehow push past these vague feelings of defended integrity, he could swallow enough shit, smile with enough saccharine plasticity and lick enough anus to rise through this hollow social hierarchy. After all, this is what they want him to do, and how hard could it be? But his desire to do so couldn’t be less existent; and at least this way, while his physical self may be enslaved, his mind remains, in some important way, free.

Often he dreams of something less — he pines for the life of a bum, or perhaps a third-worlder, bound not by contrived, comfortable sterility affording him a living he does not deserve, but by the need for gritty survival, day to day, which would at least have meaning. He would have tangible goals, a purposeful connection to nature, his humanity and his mortality.

In the last hour he enters an almost dreamlike state, the promise of release from the cubicle’s chains tantalizingly close. Eyes glazed, mouth dry, irritated and lethargic, he sits patiently, waiting, watching the numbers in the bottom-right corner of the screen creep forward. 4:12. 4:13. 4:14, until finally, there it is, the euphoria of freedom, once again. A kind of unshackling that in the early, more hopeful days brought some light to his life, but which has since dulled alongside everything else. Nonetheless, he rises defiantly with what scraps of dignity remain, only to be greeted with great melancholy by the fading heat of a setting sun.

Twitter: @adamwinfield


Cyber Security Requires an Important Ingredient: Strong UX

2018-03-13 17:57:05

In 1993, Apple hired its first User Experience Architect, Don Norman. Today, Norman is considered to be the father of human-centered design — also known as user experience design.

He even coined the term ‘user experience.’ Norman explains, “I invented the term because I thought human interface and usability were too narrow. I wanted to cover all aspects of the person’s experience with the system including industrial design, graphics, the interface, the physical interaction, and the manual.”

User experience isn’t just important for B2C companies like Apple. It’s equally crucial for B2B products and services, like company software programs. Just as Apple products need to be pleasant and easy-to-use for consumers, B2B software needs to be pleasant and easy-to-use for individual employees.

Standing out for having good UX and UI is also crucial for marketing purposes, since the effectiveness of cyber security tools depends on a customer’s ability and willingness to use them. A cyber security platform has to be both functional and user-friendly to attract customer attention and investor funding in the first place. As Hili Geva, COO of product agency Inkod, points out, “When it comes to UX for cyber security, the challenge is to turn the company solution into the most innovative and competitive cyber security platform. Differentiation is essential; the UX and UI of the product should wow customers and investors alike, generating the buzz required to help market the company.”

With UX for cyber security, a lot is at stake; the security of entire companies, in addition to the success of the software itself, hinges largely on the software’s ease of use.

UX and UI challenges in the cyber security industry

Improving cyber security UX and UI isn’t without its challenges. Users are reluctant to comply with security measures that prevent them from enjoying their work and other web browsing experiences. Employees whose companies insist that they not visit certain websites, for example, might only feel restricted and will see the security strategy as a nuisance rather than a help. Even worse, resentful users are prone to resist any security measures that they deem too intrusive, risking their company’s security if they choose to not comply.

Cyber security measures must therefore keep the user experience in mind. Security tools and software should be hassle-free, fairly nonrestrictive, and integrate smoothly with a user’s regular interface and workflow.

In other words, better protection should not mean a worse user experience. With that in mind, here are some important factors to consider for improved cyber security UX and UI.

Balancing security with UX

User experience experts develop their design strategy with the users’ needs in mind. In most industries, maximizing simplicity and ease of use for the customer is an obvious goal. But in the cyber security industry, this raises an interesting question: If there is indeed a trade-off between strong security and good UX, how can security be both effective and pleasant for the user?

Some security measures, like two-step authentication, are rarely user friendly. Users prefer simple, easy-to-use, minimal-fuss processes, and two-step authentication, which is both complex and time-consuming, tends to be the exact opposite.

While companies shouldn't eliminate these ultra-secure methods, they should make a point of focusing on UX in areas where they feel the user experience has been compromised. If a company finds that a new strategy improves security, it shouldn't consider the job done until the strategy not only makes the system more secure but is also friendly to users.

Designing based on human interpretation

Many UX challenges for cyber security and other software programs happen because they were designed to reflect technical correctness — but didn’t necessarily align with user interpretation.

A particularly illustrative example of this dissonance is the error message pop-up of Microsoft Windows 3.1 through 98. The pop-up read, “This program has performed an illegal operation and will be shut down.” While the message did the job from a purely technical standpoint, the term “illegal” was naturally alarming to non-technical minded users.

Just because something is technically correct, that doesn’t mean it’s necessarily friendly for users. As you change or add new features to your product, double check to make sure the technical adjustments are reflected to the users in a way that they intuitively understand.

Minimizing complexity

When it comes to onboarding a new cyber security system, the software must be smoothly integrated into the existing network infrastructure without the need to remove or restructure existing tools. A new security tool or software shouldn’t disrupt an employee’s workflow or intrude on existing company programs.

Even after the onboarding process, cyber security tools need to be as easy as possible to use. Rather than overwhelm the user with complex technical data, these tools should have at-a-glance dashboards with easy-to-digest information.

When implemented with particular attention to UX, improved security isn’t in tension with user experience at all. On the contrary, it goes hand-in-hand with usability, since employees are more likely to abide by a user friendly security protocol in their day-to-day work. People want to use their devices in more secure ways — provided that doing so doesn’t cause hassle or inconvenience.


I ended up with my own dog breed/image API content in 10 minutes & you can have it too!

2018-03-13 17:14:28

I recently had to build a sample project for a programming tutorial I will be giving. And I thought to myself, what better way to make the work fun than to build the sample project around cute dog images?

I wanted to be able to show cute dog pictures of each dog breed. I thought of different ways of getting the dog breeds. I considered scraping dog breed lists from Wikipedia or another dog site with breed lists. But taking 30 minutes to an hour to write a scraping script tailored to the site layout to gather the dog breeds and corresponding image URLs just seemed like overkill. And it didn't help that Wikipedia doesn't have the cutest dog pictures. That means I would have had to manually curate cute dog pictures… Gross, manual curation in the age of Artificial Intelligence? Not for me. There had to be a better way.

Turns out there was a dog breed/image API project! Once I found it, it felt like there was no wrong in the world. I mean, what matters when a dog API exists? But this would have meant that I'd have to add a networking layer to my sample app when my tutorial would not include networking, not good! It also meant that I'd be dynamically fetching images for a sample app that could be used by kids… really not good! New images could be NSFW and definitely not safe for kids. Interestingly, there was a Reddit thread about a dog image API that does not guarantee the returned image is a dog… Definitely not safe either! It seemed like more work with more risk, not a great combination. I had also seen one breed/image that seemed offensive, so I wasn't eager to promote that API.

Then it hit me: why not kill two birds with one stone and have my own dog image API content? The dog breed list endpoint always returned an identical list, and the image URL for each breed had a fixed structure based on the breed name. The content had no copyright claims, so I could write a simple PHP script to build a starting point for my own dog breed images in a few simple steps. With this I could control the image quality and safety. Here is a similar, updated PHP script:

1. Have the breed list.

$breeds = array("affenpinscher", "airedale", "akita", "appenzeller", "basenji", "beagle", "bluetick", "borzoi", "bouvier", "boxer", "brabancon", "briard", "bulldog", "bullterrier", "cairn", "chihuahua", "chow", "clumber", "collie", "coonhound", "corgi", "dachshund", "dane", "deerhound", "dhole", "dingo", "doberman", "elkhound", "entlebucher", "eskimo", "germanshepherd", "greyhound", "groenendael", "hound", "husky", "keeshond", "kelpie", "komondor", "kuvasz", "labrador", "leonberg", "lhasa", "malamute", "malinois", "maltese", "mastiff", "mexicanhairless", "mountain", "newfoundland", "otterhound", "papillon", "pekinese", "pembroke", "pinscher", "pointer", "pomeranian", "poodle", "pug", "pyrenees", "redbone", "retriever", "ridgeback", "rottweiler", "saluki", "samoyed", "schipperke", "schnauzer", "setter", "sheepdog", "shiba", "shihtzu", "spaniel", "springer", "stbernard", "terrier", "vizsla", "weimaraner", "whippet", "wolfhound");

2. Loop through the list &amp; download each breed image to its corresponding folder. I had to parse through JSON and all that fun stuff, but you don’t have to:

$baseUrl = ""; // the image host base URL was stripped in formatting; set this to your image source

for ($a = 0; $a < sizeof($breeds); $a++) {
    if (!file_exists($breeds[$a])) {
        // Create a folder for this breed if it does not exist yet
        mkdir($breeds[$a], 0777, true);
    }
    downloadFile($baseUrl . $breeds[$a] . '/' . $breeds[$a] . '.jpg', $breeds[$a] . '/' . $breeds[$a] . '.jpg');
}

// Function to download the file at $url and save it to $path
function downloadFile($url, $path) {
    $file = fopen($url, 'rb');
    if ($file) {
        $newf = fopen($path, 'wb');
        if ($newf) {
            // Copy the remote file in 8 KB chunks
            while (!feof($file)) {
                fwrite($newf, fread($file, 1024 * 8));
            }
            fclose($newf);
        }
        fclose($file);
    }
}

Of course, an equivalent script in any language would do.

3. You now have your own safe set of dog breed images to start with and build something cool. Celebrate!

Key Lesson: It never hurts to think through better solutions before implementation even when you have one that you know would work.

My recent article on the threat of open A.I. was a bit too scary, so I felt I needed to write a “fluff piece” to even it out. In a follow-up post, I’ll show how to use these images to do something really fun with deep learning and computer vision!

I ended up with my own dog breed/image API content in 10 minutes & you can have it too! was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Self Driven Data Science — Issue #39

2018-03-13 17:09:16

Here’s this week’s lineup of data-driven articles, stories, and resources, delivered faithfully to your inbox for you to consume. Enjoy!

A Beginner’s Guide to Data Engineering — Part II

In this follow-up article, the author goes a bit more in-depth, focusing on building good data pipelines and highlighting ETL best practices using Python, Airflow, and SQL.


Stop Looking for Data Scientists

We are asking the wrong things from Data Scientists and we are looking in the wrong places. The author argues that data science is more about the intelligent use of programming, rather than programming itself.

When K-Means Clustering Fails

How do we segment our market? Typically, one of the first approaches is K-means clustering. As a popular data clustering technique, K-means is effective for projects that necessitate market segmentation.


How to Datalab: Running Notebooks Against Large Datasets

Streaming your big data down to your local environment is slow and costly. This article helps you further utilize interactive Python notebooks by running them in the cloud and therefore improving speed and data connectivity.

Predicting Upsets in the NCAA Tournament

It’s time for March Madness! Picking upsets correctly can distinguish your bracket and give you a competitive edge in your pool. This exploratory data analysis dives into predicting possible upsets and how to use an algorithmic edge to beat the odds.

Source: xkcd

Any inquiries or feedback regarding the newsletter or anything else are greatly encouraged. Feel free to reach out to me on Twitter or LinkedIn, and check out some more content at my website.

Don’t forget to help me spread the word and share this newsletter on social media as well.

Thanks for reading and have a great day!

Self Driven Data Science — Issue #39 was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

How Chris Messina Works, and What’s the Future of AMAs?

2018-03-13 17:01:55

Founder Interview

Please welcome Chris Messina to Hacker Noon! Chris is internet famous for inventing the hashtag, and his new company, Molly, is more than MDMA’s namesake. It was great to learn more about how he sees the internet and why he does the work he does. I hope you enjoy, and if you have another question for Chris, you can ask him on Molly :-)

David: Molly’s been described as an “automated, 24–7 AMA service.” How much of the product/service is aggregating content you put out on the internet versus actually writing new content about the content you put out on the internet?

Chris: Molly is still in her early infancy, and we want to provide a service that is straightforward and useful before we get too fancy by generating answers on behalf of people. To begin, we want to make it easy and enjoyable for Molly users to ask and answer questions directly, in their own words. At the same time, we believe that many answers to the questions that people have are already public and available on the social web, but are hard to find or are inconveniently spread across a number of platforms. Therefore, we encourage our users to connect to third party platforms like Medium, Twitter, Instagram, and others — so that we can automatically pull up content that Molly thinks might be related, or might help provide more context.

We have a long way to go to improve the relevance of some of these supporting results, but already we’re seeing thematic connections emerge across what were previously disconnected media sources… for example, a lot of people want to know about how Ryan Hoover built Product Hunt’s vibrant community, and in his answer on Molly, we automatically surface YouTube videos and podcasts he’s produced that allow the viewer to go much deeper into his thoughts and experience.

Molly has major personal branding implications. Personalities could in theory be interactive and accessible without the person spending the time to communicate… As more AI shapes brand interactions, how will authenticity be maintained?

Authenticity is closely connected with trust, and trust is a measure of consistency, predictability, and evolution across changing circumstances. Authenticity, therefore, is incredibly important in establishing, building, and maintaining human relationships over time. While, yes, we see a world in which AI counterparts like Molly can act and answer questions on behalf of people, they are not substitutes for the real thing as they lack qualities that make humans so interesting and so confounding.

To draw a crude analogy: if the car is a massive evolution in the capabilities of a bicycle (mobility, range, protection from the elements, capacity, etc.), then an interactive, personalized, and adaptive service like Molly will become a massive evolution over email vacation auto-responders, away messages, or phone answering machines — rather than a replacement for a person. Throughout time, people have always found themselves too busy or unable to meet all the demands for their attention; Molly is an attempt to give people more focus on the interactions and connections that matter to them by taking care of some of the more rote or tedious requests for information that’s already been shared.

In the answers that Molly generates, the voice of the individual should persist and be maintained. She simply augments the ability of the individual to marshal the efforts and energy they’ve spent in the past to meet the demands of the present.

A first name dotcom! That’s awesome. How much did it cost? Or if you can’t say the amount, can you share some details from the negotiation process for such a valuable property?

We can’t disclose how much we paid for the domain, but that’s actually the least interesting aspect of the story of how we ended up with it. It turns out that the domain had been owned by one woman since the ’90s — Molly Holzschlag. Molly is one of the first web standards pioneers and has spent her career advocating on behalf of the open web, among other things. I met Molly many, many years ago through the web standards community and felt a kinship in our values and ideals about what the web represents as an open and free platform for publishing. We had already come up with the name Molly, and so I thought: what the heck, why not reach out, tell Molly about what we’re up to, and see, on the off chance, if she might be interested in helping out?

After we told her our story and the problem that we were interested in addressing, she loved the idea and graciously agreed to let us use the domain for a fair price. Given how personal this story is, and in honor of Molly’s contributions to the web platform itself, we’re proud to be able to share her story and her symbolic contribution to what we’re doing now.

Molly’s homepage has some of the who’s who of personal branding in tech (Michael Seibel, Nisha Dua, Gary Vaynerchuk, Ryan Hoover, etc). Could you share a bit about who’s invested in Molly and who’s building Molly?

Our current investors include Betaworks, BBG Ventures, Crunchfund, Halogen Ventures, Y Combinator and a few strategic angels.

The Molly team is made up of three co-founders and a few remote engineers — definitely a scrappy crew considering we’ve built products on web, iOS, Messenger, SMS, as well as an Alexa skill in the relatively short period we’ve been working on this!

My background is in product design, UX, developer platforms, and building online communities. I spent several years at Google working on their APIs and social platform, and then a year at Uber where I led developer experience. Esther was one of the first YouTube stars and translated that success into a successful consultancy helping large CPG brands find their online audiences; later she helped smaller startups find their voice. Ethan has been hacking since he was six and has spent a lot of time working at startups and running machine learning projects related to personality and psychographics.

What’s Molly’s long-term vision?

Molly will be one of the first brands of the conversational computing era. By learning about the kinds of questions people ask each other and how they answer, she will be a resource that people consult when they want to learn about others or recall information that they may have forgotten. She will be one of the first post-feed, post-camera social agent platforms on the internet.

What are Molly’s KPIs and short term goals?

We’re primarily chasing after product-market fit right now — seeking engagement, utility, and helping Molly’s first users answer the questions that their friends and fans have for them. We want to encourage curiosity and learning about people’s experiences and perspectives — in fun, efficient, and creative ways.

In terms of short-term goals, let me draw an analogy: if you think back to the era of landline telephones, one of the inventions that made it easier to stay connected without having to be stuck by the receiver was the answering machine.

Indeed, in the era of AOL Instant Messenger and other chat services, we had the Away Message, and now in iOS there’s the Do Not Disturb and Do Not Disturb While Driving modes.

These are all useful ways to respond to someone that needs or wants your attention when you don’t have any to offer, or would prefer to get back to them later. But they’re also very dumb.

We believe that there’s a lot of useful information that people often want to know about you that’s already available to them, but it’s just too hard for them to find it easily, quickly, or in a context that they’d think or know to look in. For example, if you wanted to know whether I like green tea before you invite me to meet at Samovar, you can ask Molly, and even though I haven’t specifically answered that question before, you can see from the Instagram photos and tweets that Molly found that indeed, I’ve certainly had some good-looking green tea before!

I didn’t have to do anything extra for Molly to learn this information — she pulled it out of my public archive of photos and tweets that I previously shared.

In the short term, we think that Molly can answer these and many other questions that you otherwise wouldn’t bother to ask, simply because the likelihood of getting a satisfying answer would be too low. The cost of asking these kinds of curiosity-driven questions (or even pragmatic ones, like what kind of beverage to meet up for) is too high, and we’re going to use AI, good product design, and emerging conversational interfaces to help lower those costs so that we can have our personal curiosities satisfied more quickly and easily than before.

Are you concerned or excited about the branding overlap with the other Molly (MDMA)?

We named our company Molly for several reasons:

  1. In the era of conversational brands, “billboard brand names” like Clorox and Duracell don’t work as well — you don’t really want to talk to them. This is one reason why, I’d wager, Alexa has such an edge over the Google Assistant — it’s more natural to say out loud “Alexa, what’s the weather today?” than “Ok Google, what’s the weather?” The latter is simply more robotic and forced. “Molly”, meanwhile, is clearly a name that you’d call someone by. It’s friendly. When it comes to talking to Molly (say, via a voice or messaging interface), it’ll feel… normal.
  2. Another reason why Molly is a good name for our brand is recall. I install all kinds of apps and have enabled dozens of skills, but remembering the name of each one is incredibly challenging. We think that using a name like Molly makes it easier to remember her.
  3. The connection to MDMA isn’t lost on us, and the connection is also not unintentional. But while many people only think of MDMA as a party drug, we’re thrilled that MDMA was recognized as a “breakthrough therapy” for posttraumatic stress disorder (PTSD) last year. This year we’re going to see the first FDA-approved Phase III trials of MDMA to treat patients with severe PTSD, and, presuming they’re successful, they will help to lessen the stigma associated with this substance. The effects of MDMA have been well documented and suggest a powerful ability to create deep empathy, connection, and understanding between people — and that’s absolutely an inspiration for the kind of effects we hope our products might create.

How far down the AI writing original content rabbit hole will Molly go?

We imagine a time when Molly will be able to synthesize answers on behalf of people, but that day is not today.

We are already experimenting internally with different approaches to answer generation based on recent machine learning techniques, but the state of the art is still immature and prone to error, due to the complex nature in how humans form questions.

A step towards what you’re suggesting is Molly surfacing relevant media or content that you’ve shared with her and helping you assemble an answer. For example, if someone asks you what your favorite book of 2017 was, Molly could show you all the books you’ve read to help you remember. Down the road, we’ll be able to highlight snippets of content that you’ve previously written or shared that might also answer the question — offering a kind of beefed-up, personalized autosuggest tool. There’s already a tool in our alpha iOS app called “Ask Molly” that attempts to offer content to you to help you answer questions that you receive — but it’s rather limited so far.

In these early days, we’re spending a lot of effort figuring out how to consume content that people have published online and making sense of it, so we can repurpose it to answer questions automatically. As we get better at understanding the kinds of questions people ask and what typical responses look like, we’ll be able to automate more. For the near term, though, it’s going to be important to keep a human (you!) in the loop until we start composing original answers on your behalf, in your voice.

You invented the hashtag, launched tons of products, helped scale Mozilla, held key roles at Google &amp; Uber, and now you’re founding another tech company — where do you find the time? Can you walk us through what tasks you did over the last week in your work life?

It’s funny — day to day I feel pretty easily distractible and not that productive (just ask my colleagues, and they’ll confirm this!). That said, when I get into something, I really get into it, and I tend to become the thing that I’m pursuing. I’ll give you a high-level sense of a typical week — which admittedly is somewhat unusual because we’re at the tail end of Y Combinator’s Winter ’18 batch:

  • Hunted half a dozen new products (including a few YC batchmates!)
  • Got some great coverage on Molly by Molly over at The Ringer
  • Invited 1000 people to the brand new Molly community on Discord
  • Tweaked the design of our answer cards
  • Designed our Twitter answer embeds (example)
  • Designed and produced a bunch of animated Twitter promo cards (example)
  • Attended an afternoon + evening YC event
  • Attended a book launch lunch event with Chris Hughes (co-founder of Facebook)
  • Attended a Designer Fund/YC design workshop
  • Tweeted a bunch
  • Otherwise did a bunch of outreach to a lot of the first Molly users — gathering feedback, tracking down and filing bugs, backfilling content (interviews, AMAs, etc) and otherwise making small design and UI improvements

I’m sure there was more — but that covers some of the highlights!

AI in text based communication is growing like crazy. I think it’s about to blow up customer service. In the future (say 5 or so years), what percentage of text based customer service/support do you think will be powered by AI?

I don’t know if that’s exactly the right way to frame that question, although I agree with your general assertion. A better way is to ask: what experiences are humans really good at creating that machines aren’t, and which jobs can machines be trained to do better? My hope (though we’ll see how it plays out) is that humans and machines can become better collaborators, focusing on what each does uniquely well. In that light, 100% of customer service and support can be assisted by AI in the future, but that doesn’t mean that 0% of those tasks will be handled by humans.

In fact, I think there’s an entirely new domain that will emerge in this space that I call “relationship design”, and it goes beyond just mapping the customer journey and addressing pain points… it really requires an entirely new way to envision the relationships that people have with brands — as extensions and augmentations of themselves. AI will absolutely be part of that trend, but so will human curated experiences, insights, and creative innovations.

It’s wild that you changed the dictionary. “Inventor of the Hashtag.” If you could invent another word or internet function and have others adopt it, what would it be and why?

Ha! Well… I tried! In 2009, two years after I proposed the hashtag, I proposed another convention called the “slashtag”. The idea was simple enough: “separate the meta from the meat” — i.e. add helpful information about your content to the end of your tweet, to make it easier to read and connect with people. A few friends used them, but they never really took off. Still, to this day, you see people scatter hashtags all over their content, especially on Instagram, and it just makes it harder to read what could otherwise be simple and straightforward text. So — I think it’d be cool if more people adopted the behavior, but it’s not quite a widespread or painful enough problem to really catch on.

Also, we’re now in the era of visual and aural communication — so things like stickers, GIFs, emoji, voice recordings, and the like are crowding out the written word for some users… to influence the semiotics of a generation at this point, I think you’d have to focus on multimedia rather than text.

You’ve hunted 1,468 products on Product Hunt. What’s the most common misconception about how to launch products?

It may seem obvious, but it’s really important for makers to engage the Product Hunt community with the story of why a product exists, what problem it’s trying to solve, how it solves it, and why that solution is meaningful or important to the maker. I mean, it should be common knowledge, but people are attracted to and compelled by stories they can relate to more than they are by ideas alone. The more your product fits into a known, familiar, or relatable narrative, the more likely I think you are to perform well when you’re hunted.

If you could change anything about how the internet functions today, what would it be?

This may sound funny — but I kind of wish that people had to look at a mirror when they published online. There’s research showing that putting mirrors in department stores deters shoplifting: it causes people to think twice about their actions, because suddenly they’re confronted with an image of themselves that they don’t like.

The internet has become a pretty hostile environment — and it makes it hard to have meaningful and useful conversations that reveal commonality and connection. Rather than starting with trying to merely improve the connection between people, I think if we improved and strengthened the connection that people have with themselves and their behavior, some of what made the early social web so great and fulfilling might be restored.

Learn more about Molly &amp; ask Chris your own question on the platform.

How Chris Messina Works, and What’s the Future of AMAs? was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Deploying Java Applications with Kubernetes and an API Gateway

2018-03-13 12:11:04

In this article you’ll learn how to deploy three simple Java services into Kubernetes (running locally via the new Docker for Mac/Windows Kubernetes integration), and expose the frontend service to end-users via the Kubernetes-native Ambassador API Gateway. So, grab your caffeinated beverage of choice and get comfy in front of your terminal!

A Quick Recap: Architecture and Deployment

In October last year Daniel Bryant extended his simple Java microservice-based “Docker Java Shopping” container deployment demonstration with Kubernetes support. If you found the time to complete the tutorial you would have packaged three simple Java services — the shopfront and stockmanager Spring Boot services, and the product catalogue Java EE DropWizard service — within Docker images, and deployed the resulting containers into a local minikube-powered Kubernetes cluster. He also showed you how to open the shopfront service to end-users by mapping and exposing a Kubernetes cluster port using a NodePort Service. Although this was functional for the demonstration, many of you asked how you could deploy the application behind an API Gateway. This is a great question, and accordingly we were keen to add another article in this tutorial series (with Daniel’s help) with the goal of deploying the “Docker Java Shopping” Java application behind the open source Kubernetes-native Ambassador API Gateway.

Figure 1. “Docker Java Shopping” application deployed with Ambassador API Gateway

Quick Aside: Why Use an API Gateway?

Many of you will have used (or at least bumped into) the concept of an API Gateway before. Chris Richardson has written a good overview of the details at, and the team behind the creation of the Ambassador API Gateway, Datawire, have also talked about the benefits of using a Kubernetes-native API Gateway. An API Gateway allows you to centralise a lot of the cross-cutting concerns for your application, such as load balancing, security and rate-limiting. In addition, an API Gateway can be a useful tool to help accelerate continuous delivery. Running a Kubernetes-native API Gateway also allows you to offload several of the operational issues associated with deploying and maintaining a gateway — such as implementing resilience and scalability — to Kubernetes itself.

There are several API Gateway choices for Java developers, such as Netflix’s Zuul, Spring Cloud Gateway, Mashape’s Kong, a cloud vendor’s implementation (such as Amazon’s API Gateway), and of course the traditional favourites of NGINX and HAProxy, and some of the more modern variants like Traefik. Choosing an API Gateway can involve a lot of work, as this is a critical piece of your infrastructure (touching every bit of traffic into your application), and there are many tradeoffs to be considered. In particular, watch out for potential high-coupling points — for example, the ability to dynamically deploy “Filter” Groovy scripts into Netflix’s Zuul enables business logic to become spread between the service and the gateway — and also the need to deploy complicated datastores as the end-user traffic increases — for example, Kong requires a Cassandra cluster or Postgres installation to scale horizontally.

For the sake of simplicity in this article we’re going to use the open source Kubernetes-native API Gateway, Ambassador. Ambassador has a straightforward implementation which reduces the ability to accidentally couple any business logic to it. It also lets you specify service routing via a declarative approach that is consistent with the “cloud native” approach of Kubernetes and other modern infrastructure. The added bonus is that routes can be easily stored in version control and pushed down the CI/CD build pipeline with all the other code changes.

Getting Started: NodePorts and LoadBalancers 101

First, ensure you are starting with a fresh (empty) Kubernetes cluster. This demonstration will use the new Kubernetes integration within Docker for Mac. If you want to follow along you will need to ensure that you have installed the Edge version of Docker for Mac or Docker for Windows, and also enabled Kubernetes support by following the instructions within the Docker Kubernetes documentation. We’re going to set up ingress first with a NodePort before switching to Ambassador. If you’re interested in learning more about the nuances of Kubernetes ingress, this article has more detail.
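Before moving on, it may be worth confirming that kubectl is actually pointing at the local Docker-provided cluster rather than, say, a stale minikube context. A minimal check, assuming the docker-for-desktop context name that the Docker Edge releases create (yours may differ):

# Switch kubectl to the local Docker-provided cluster (context name is an assumption)
$ kubectl config use-context docker-for-desktop
# The single local node should report a Ready status
$ kubectl get nodes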

Next, clone the “Docker Java Shopping” GitHub repository. If you want to explore the directory structure and learn more about each of the three services that make up the application, take a look at the previous article in this series or the associated mini-book “Containerizing Continuous Delivery in Java” that started all of this. When the repo has been successfully cloned you can navigate into the kubernetes directory. If you are following along with the tutorial you will be making modifications within this directory, so you are welcome to fork your own copy of the repo and create a branch that you can push your work to. I don’t recommend skipping ahead (or cheating), but the kubernetes-ambassador directory contains the complete solution, in case you want to check your work!

$ git clone https://github.com/danielbryantuk/oreilly-docker-java-shopping.git
$ cd oreilly-docker-java-shopping/kubernetes
(master) kubernetes $ ls -lsa
total 24
0 drwxr-xr-x  5 danielbryant staff 160  5 Feb 18:18 .
0 drwxr-xr-x 18 danielbryant staff 576  5 Feb 18:17 ..
8 -rw-r--r--  1 danielbryant staff 710  5 Feb 18:22 productcatalogue-service.yaml
8 -rw-r--r--  1 danielbryant staff 658  5 Feb 18:11 shopfront-service.yaml
8 -rw-r--r--  1 danielbryant staff 677  5 Feb 18:22 stockmanager-service.yaml

If you open up the shopfront-service.yaml in your editor/IDE of choice, you will see that we are exposing the shopfront service as a NodePort accessible via TCP port 8010. This means that the service can be accessed via port 8010 on any of the cluster node IPs that are made public (and not blocked by a firewall).

apiVersion: v1
kind: Service
metadata:
  name: shopfront
  labels:
    app: shopfront
spec:
  type: NodePort
  selector:
    app: shopfront
  ports:
  - protocol: TCP
    port: 8010
    name: http

When running this service via minikube, NodePort allows you to access the service via the cluster external IP. When running the service via Docker, NodePort allows you to access the service via localhost and the Kubernetes allocated port. Assuming that Docker for Mac or Windows has been configured to run Kubernetes successfully you can now deploy this service:

(master) kubernetes $ kubectl apply -f shopfront-service.yaml
service "shopfront" created
replicationcontroller "shopfront" created
(master) kubernetes $ kubectl get services
kubernetes ClusterIP <none> 443/TCP 19h
shopfront NodePort <none> 8010:31497/TCP 0s

You can see the shopfront service has been created, and although there is no external-ip listed, you can see that the port specified in the shopfront-service.yaml (8010) has been mapped to port 31497 (your port number may differ here). If you are using Docker for Mac or Windows you can now curl data from localhost (as the Docker app works some magic behind the scenes), and if you are using minikube you can get the cluster IP address by typing minikube ip in your terminal.
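If you would rather not eyeball the kubectl get services output, the allocated port can also be extracted directly. A minimal sketch, assuming the single-port shopfront service defined above:

# Print only the NodePort that Kubernetes allocated to the shopfront service
$ kubectl get service shopfront -o jsonpath='{.spec.ports[0].nodePort}'
# On minikube, pair that port with the cluster IP reported by:
$ minikube ip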

Assuming you are using Docker, and that you have only deployed the single shopfront service, you should see this response from a curl using the port number shown by the kubectl get svc command (31497 for me):

(master) kubernetes $ curl -v localhost:31497
* Rebuilt URL to: localhost:31497/
* Trying ::1…
* Connected to localhost (::1) port 31497 (#0)
> GET / HTTP/1.1
> Host: localhost:31497
> User-Agent: curl/7.54.0
> Accept: */*
< HTTP/1.1 500
< X-Application-Context: application:8010
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Tue, 06 Feb 2018 17:20:19 GMT
< Connection: close
* Closing connection 0
{"timestamp":1517937619690,"status":500,"error":"Internal Server Error","exception":"org.springframework.web.client.ResourceAccessException","message":"I/O error on GET request for \"http://productcatalogue:8020/products\": productcatalogue; nested exception is java.net.UnknownHostException: productcatalogue","path":"/"}

You’ll notice that you are getting an HTTP 500 error response with this curl, and this is to be expected as you haven’t deployed all of the supporting services yet. However, before you deploy the rest of the services you’ll want to change the NodePort configuration to ClusterIP for all of your services. This means that each service will only be accessible over the network within the cluster. You could of course use a firewall to restrict a service exposed by NodePort, but using ClusterIP in our local development environment means we can’t cheat: the services can only be accessed via the API gateway we will deploy.

Open shopfront-service.yaml in your editor, and change the NodePort to ClusterIP. You can see the relevant part of the file contents below:

apiVersion: v1
kind: Service
metadata:
  name: shopfront
  labels:
    app: shopfront
spec:
  type: ClusterIP
  selector:
    app: shopfront
  ports:
  - protocol: TCP
    port: 8010
    name: http

Now you can modify the services contained within the productcatalogue-service.yaml and stockmanager-service.yaml files to also be ClusterIP.

You can also now delete the existing shopfront service, ready for the deployment of the full stack in the next section of the tutorial.

(master *) kubernetes $ kubectl delete -f shopfront-service.yaml
service "shopfront" deleted
replicationcontroller "shopfront" deleted

Deploying the Full Stack

With a once again empty Kubernetes cluster, you can now deploy the full three-service stack and the get the associated Kubernetes information on each service:

(master *) kubernetes $ kubectl apply -f .
service "productcatalogue" created
replicationcontroller "productcatalogue" created
service "shopfront" created
replicationcontroller "shopfront" created
service "stockmanager" created
replicationcontroller "stockmanager" created
(master *) kubernetes $ kubectl get services
kubernetes ClusterIP <none> 443/TCP 2h
productcatalogue ClusterIP <none> 8020/TCP 1s
shopfront ClusterIP <none> 8010/TCP 1s
stockmanager ClusterIP <none> 8030/TCP 1s

You can see that the port that was declared in each service is available as specified (i.e. 8010, 8020, 8030) — each running pod gets its own cluster IP and associated port range (i.e. each pod gets its own “network namespace”). We can’t access these ports from outside the cluster (like we can with NodePort), but within the cluster everything works as expected.
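If you want to prove to yourself that the services really are reachable from inside the cluster, one option is to run a throwaway pod with curl on board. A quick sketch, assuming the radial/busyboxplus:curl image (any image that bundles curl would do):

# Launch a disposable pod inside the cluster and curl the shopfront service by name
$ kubectl run curl-test -it --rm --restart=Never --image=radial/busyboxplus:curl -- curl -s http://shopfront:8010

The --rm flag means the pod is cleaned up automatically when the command exits.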

You can also see that using ClusterIP does not expose the service externally by trying to curl the endpoint (this time you should receive a “connection refused”):

(master *) kubernetes $ curl -v localhost:8010
* Rebuilt URL to: localhost:8010/
* Trying ::1…
* Connection failed
* connect to ::1 port 8010 failed: Connection refused
* Trying…
* Connection failed
* connect to port 8010 failed: Connection refused
* Failed to connect to localhost port 8010: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8010: Connection refused

Deploying the API Gateway

Now is the time to deploy the Ambassador API gateway in order to expose your shopfront service to end-users. The other two services can remain private within the cluster, as they are supporting services, and don’t have to be exposed publicly.

First, create a LoadBalancer service that uses Kubernetes annotations to route requests from outside the cluster to the appropriate services. Save the following content within a new file named ambassador-service.yaml. Note the getambassador.io/config annotation. You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects, and clients such as Ambassador can retrieve this metadata.

apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: shopfront
      prefix: /shopfront/
      service: shopfront:8010
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador

The Ambassador annotation is key to how the gateway works — how it routes “ingress” traffic from outside the cluster (e.g. an end-user request) to services within the cluster. Let’s break this down:

  • getambassador.io/config: | specifies that this annotation contains Ambassador configuration
  • apiVersion: ambassador/v0 specifies the Ambassador API/schema version
  • kind: Mapping specifies that you are creating a “mapping” (routing) configuration
  • name: shopfront is the name for this mapping (which will show up in the debug UI)
  • prefix: /shopfront/ is the external prefix of the URI that you want to route internally
  • service: shopfront:8010 is the Kubernetes service you want to route to

In a nutshell, this annotation states that any request to the external IP of the LoadBalancer service (which will be localhost in your Docker for Mac/Windows example) with the prefix /shopfront/ will be routed to the Kubernetes shopfront service running on the (ClusterIP) port 8010. In your example, when you enter http://localhost/shopfront/ in your web browser you should see the UI provided by the shopfront service. Hopefully this all makes sense, but if it doesn’t then please visit the Ambassador Gitter and ask any questions, or ping me on Twitter!

You can deploy the Ambassador service:

(master *) kubernetes $ kubectl apply -f ambassador-service.yaml
service "ambassador" created

You will also need to deploy the Ambassador Admin service (and associated pods/containers) that are responsible for the heavy-lifting associated with the routing. It’s worth noting that the routing is conducted by a “sidecar” proxy, which in this case is the Envoy proxy. Envoy is responsible for all of the production network traffic within Lyft, and its creator, Matt Klein, has written lots of very interesting content about the details. You may have also heard about the emerging “service mesh” technologies, and the popular Istio project also uses Envoy.

Anyway, back to the tutorial! You can find a pre-prepared Kubernetes config file for Ambassador Admin on the Ambassador website (for this demo you will be using the “no RBAC” version of the service, but you can also find an RBAC-enabled version of the config file if you are running a Kubernetes cluster with Role-Based Access Control (RBAC) enabled). You can download a copy of the config file and look at it before applying, or you can apply the service directly via the Interwebs:

(master *) kubernetes $ kubectl apply -f
service "ambassador-admin" created
deployment "ambassador" created

If you issue a kubectl get svc you can see that your Ambassador LoadBalancer and Ambassador Admin services have been deployed successfully:

(master *) kubernetes $ kubectl get svc
ambassador LoadBalancer <pending> 80:31053/TCP 5m
ambassador-admin NodePort <none> 8877:31516/TCP 1m
kubernetes ClusterIP <none> 443/TCP 20h
productcatalogue ClusterIP <none> 8020/TCP 22m
shopfront ClusterIP <none> 8010/TCP 22m
stockmanager ClusterIP <none> 8030/TCP 22m

You will notice on the ambassador service that the external-ip is listed as <pending> and this is a known bug with Docker for Mac/Windows. You can still access a LoadBalancer service via localhost — although you may need to wait a minute or two while everything deploys successfully behind the scenes.

Let’s try to access the shopfront now, using the /shopfront/ route you configured previously within the Ambassador annotation. You can curl localhost/shopfront/ (with no need to specify a port, as you configured the Ambassador LoadBalancer service to listen on port 80):

(master *) kubernetes $ curl localhost/shopfront/
<!DOCTYPE html>
<html lang="en" xmlns="">
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
<script src=""></script>
<!-- Include all compiled plugins (below), or include individual files as needed -->
<script src="js/bootstrap.min.js"></script>

That’s it! You are now accessing the shopfront service that is hidden away in the Kubernetes cluster via Ambassador. You can also visit the shopfront UI via your browser, and this provides a much more friendly view!

Bonus: Ambassador Diagnostics

If you want to look at the Ambassador Diagnostic UI then you can use port-forwarding. We’ll explain more about how to use this in a future post, but for the moment you can have a look around by yourself. First you will need to find the name of an ambassador pod:

(master *) kubernetes $ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
ambassador-6d9f98bc6c-5sppl   2/2       Running   0          19m
ambassador-6d9f98bc6c-nw6z9   2/2       Running   0          19m
ambassador-6d9f98bc6c-qr87m   2/2       Running   0          19m
productcatalogue-sdtlc        1/1       Running   0          22m
shopfront-gr794               1/1       Running   0          22m
stockmanager-bp7zq            1/1       Running   1          22m

Here we’ll pick ambassador-6d9f98bc6c-5sppl. You can now port-forward from your local network adapter to inside the cluster and expose the Ambassador Diagnostic UI that is running on port 8877.

(master *) kubernetes $ kubectl port-forward ambassador-6d9f98bc6c-5sppl 8877:8877

You can now visit http://localhost:8877/ambassador/v0/diag in your browser and have a look around!

When you are finished you can exit the port-forward via ctrl-c. You can also delete all of the services you have deployed into your Kubernetes cluster by issuing a kubectl delete -f . within the kubernetes directory. You will also need to delete the ambassador-admin service you have deployed.

(master *) kubernetes $ kubectl delete -f .
service "ambassador" deleted
service "productcatalogue" deleted
replicationcontroller "productcatalogue" deleted
service "shopfront-canary" deleted
replicationcontroller "shopfront-canary" deleted
service "shopfront" deleted
replicationcontroller "shopfront" deleted
service "stockmanager" deleted
replicationcontroller "stockmanager" deleted
(master *) kubernetes $ kubectl delete -f
service "ambassador-admin" deleted
deployment "ambassador" deleted

What’s Next?

Ambassador makes canary testing very easy, so look for a future article that explores that topic with Java microservices (a rough sketch of what such a route looks like follows below). Other topics we’ll explore include integrating all of this into a CD pipeline and how best to set up a local development workflow. In addition, Ambassador supports gRPC, Istio, and statsd-style monitoring, which are all hot topics in cloud-native environments today. If you have any thoughts or feedback, please feel free to comment!
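As an unofficial teaser, and purely as a sketch against Ambassador’s v0 Mapping schema rather than part of the tutorial above, a weighted canary route could look something like this (the shopfront-canary service name is borrowed from the repo’s complete solution, and the weight field sends roughly that percentage of traffic to the canary):

---
apiVersion: ambassador/v0
kind: Mapping
name: shopfront_canary
prefix: /shopfront/
service: shopfront-canary:8010
weight: 10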

Deploying Java Applications with Kubernetes and an API Gateway was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

How Verifier Will Transform Global Business Fact Checking

2018-03-13 12:06:02

Digital Transformation is the idea that every business, large or small, must learn to integrate technology into its daily functioning. In fact, by the end of 2017, two-thirds of Global 2000 CEOs were expected to have digital transformation at the center of their corporate strategy, and blockchain is quickly becoming the frontrunner among the emerging technologies to explore.

Whether that means increased integration of AI into content creation or the computerization of certain tasks traditionally left to humans, digital transformation means increased efficiency and better functioning in the 21st century.

At the core of this transformation are data and the questions of how to secure and verify it. Verification is a key part of any small business or large organization. According to research by Markets and Markets, the identity and access management market will continue to grow: by 2021 it will nearly double to $14.82 billion, up from $8.09 billion in 2016.

From verifying whether clients are being honest about their previous dealings and their accounting, to the company itself being trusted on the same points, no small business relationship can grow and thrive without a reputation comparable to that of a larger business.

Partners need to have faith in the business itself. Verification of certain financial aspects of a business is vital, and certainty that it is performing services as promised is crucial. On a global scale, this can mean the verification of news and major events in real time.

While larger businesses can afford independent audits, the cost can be prohibitive for small companies. This limits the trust other parties can place in them and, in turn, limits their business. Consistent third-party verification could help improve the trust in, and functioning of, any small business.

This is why the new technology of Verifier could transform how information is processed and approved. By creating a secure, decentralized, and democratic system of verification, the start-up could use blockchain to aid both private and public enterprises and even governments.

The Problem With Current Forms of Verification

Current modes of verification have many weaknesses that disadvantage small businesses. First, verification can be expensive: hiring independent auditors can cost thousands of dollars, something many companies can’t afford before they are established.

Second, verification can be time-consuming. This is especially bad for small businesses, which need to establish secure and trusted relationships with clients quickly while competing against larger, more established reputations. Instant and cheap verification is a necessity for any small business.

Conventional verification services also use email or other insecure modes of communication to transfer confidential data, and their reliance on centralized servers leaves them vulnerable.

For these reasons, small businesses face a barrier to building trust. The ability for clients to easily and quickly verify a business’s services is key to building a customer base. This is where blockchain can be used by small businesses to help transform their operations.

Why Verifier is The Solution

This is why Verifier could be the affordable blockchain-based tool for digital transformation, offering both B2B and P2P solutions. Robust in nature, Verifier can be applied to many industries, including financial services, logistics and transport, ecommerce, and more.

Verifier works by connecting those who need verification with “agents”: people willing to go and confirm whatever the Verifier user, whether a business or its client, requests.

An active and effective market of agents willing to carry out the service is guaranteed by Verifier’s native crypto token: agents receive Verifier tokens in exchange for each confirmation. This instant and efficient payment can then be exchanged on any crypto market for Bitcoin or Ethereum.

Even better for those in need of verification — users can commission multiple agents at once, meaning there is a type of insurance for any request.

The scenarios where Verifier could help organizations are numerous, while being affordable, quick, and secure. Services that used to cost into the hundreds of thousands of dollars could be crowdsourced for a fraction of the price.

Verifier’s Founder, Dmitry Nazarov, explains: “Every time technology has allowed us to shorten a transaction cycle or decrease transactional costs, major economic growth has followed. I believe that our Proof of Witness protocol is the basis for future growth in the era of digital transformation across industries, including fintech, retail, insurance, etc.”

Real Life Use Cases Already in Action

For example, when a small business is buying products online, a seller might not have packed the exact product that was ordered, or might have put a product of poor quality into the package. If you can’t receive the package personally, how do you ensure that you’ve received the right order?

Additionally, when signing a contract, the signatures and identities of the signing persons and the date of signing must be verified. Notarization can be expensive and slow. Verifier would allow small businesses to use those services instantly and for much less.

Digital transformation is an inescapable part of running a business in the future, and the ability to easily integrate technology into daily operations is central to it. The global reach, decentralization, and anonymity provided to both agents and clients can only accelerate this shift.

Verifier has a chance not only to transform verification, but to use blockchain to alter small businesses for the better. This could mean that blockchain and its typical uses no longer have to be confined to stores of value. Verifier’s unique use of human agents as “nodes” of a sort could alter the way people see the possibilities of blockchain.

A technology like Verifier would be a destabilizing force for many industries, in the best sense. Innovators of blockchain would rethink the definition of value-added, and innovators of other technologies would see how blockchain’s qualities can positively contribute to any service. Verifier could inspire a new generation of innovators across the board. How do you see Verifier’s strengths altering the tech sector?


The author has had a working or personal relationship with one or more companies mentioned in this article in the past. Access to mentioned company’s management and information was made through the author’s personal network. All information was vetted prior to posting.


This essay is not intended to be a source of investment, financial, technical, tax, or legal advice. All of this content is for informational purposes only.

How Verifier Will Transform Global Business Fact Checking was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

How Agora Will Use Blockchain to Bring True Democracy to the World

2018-03-13 12:06:02

It seems that our world is entering a very dangerous time. Regardless of one’s political views, it is impossible to ignore the growing dangers stemming from modernization. With the increased presence of technology in everyday life, governments are able to intrude on personal privacy through internet histories, social media, cell phone data and more.

China recently made news by announcing the implementation of a “social credit” system, which consists of digitally tracked data points that will determine the liberties one has in day-to-day life.

However, technology can also be used to improve popular rule, rather than threaten it, and one blockchain start-up is committed to this idea. Agora, a project spun out of EPFL’s Swiss Lab for Digital Democracy, has created a blockchain voting platform dedicated to ensuring transparent and verifiable elections around the world.

Democratic elections in the 21st century are not only a feat of political achievement, but also of logistical and bureaucratic organization. The efficient and timely processing of millions of votes can be a challenging undertaking, with hurdles that arise in both paper ballot and electronic voting processes. Security around ballots and polling stations is another considerable challenge.

With these modern issues in mind, blockchain technology is uniquely suited to serve the interests of a modern democracy and its voting needs. Agora, in particular, offers a comprehensive technological offering to any state that seeks to implement a secure and fair election.

Currently, digital voting is plagued by flawed Electronic Voting Machines, or EVMs. These EVMs, as elections around the US and the world in the past few years have proved, are highly fallible. Indeed, at a tech conference just last year, engineers were able to hack a standard US voting machine in just over an hour.

Security vulnerabilities are not limited to single machines, either, as electronic voting architectures often utilize more centralized tally and control systems. A single hacked EVM may not swing an election, but a server where millions of votes are stored could be compromised by increasingly sophisticated hackers.

This is why the decentralization, security and traceability offered by Agora’s technology are more vital now than ever. Operating through a multi-layered blockchain architecture, Agora has the ability to improve upon and protect against each of the weaknesses present in current EVMs. Agora could be the tool that democracies use to transport their elections into the digital age.

Agora Brings Transparency and Security to Voting

By moving to a digital solution, governments can remove elements that often slow down and negatively affect democratic participation. Paper ballots and election employees, for example, are a high expense item in paper elections. These costs can cause election officials to reduce the number of voting locations, which increases the time it takes for some voters to participate. Even once an individual has arrived at a voting location, wait times can stretch many hours as a result of overcrowding. This issue can cause many potential voters to simply decline to vote.

Over the long run, electronic voting on Agora’s platform from one’s own personal device would allow millions of voters to access their ballot from their work or home, causing an expansion in voter turnout. While previously contemplated remote voting technologies have sacrificed privacy, Agora’s technology assures anonymity through its ballot anonymization algorithm.

In addition to this aforementioned convenience, Agora’s technology could solve one of the greatest dangers facing democracy today: security. With an immutable blockchain that distributes voting data across many nodes, Agora provides a publicly auditable data trail that cryptographically proves an election remains untampered.

In addition to its proprietary blockchain, Agora’s Cotena layer copies periodic snapshots of the lower layers of Agora’s network to the Bitcoin blockchain to provide immutable security for all voting data. Altering that data would require hacking the Bitcoin network itself, which does not presently seem likely.

Finally, one of the most revolutionary aspects of Agora is its network’s ability to be audited by third parties and voters themselves. Politically-motivated recounts may vanish, as anyone can audit an election and monitor the results. This open and transparent level of democracy could be transformative.

For these reasons, Agora could be the future of election security.

March 2018: Test Pilot in the Sierra Leone 2018 Presidential Election

In March 2018, Agora deployed its digital voting platform in Sierra Leone’s presidential elections. The process ensured that each vote was unique, secure and logged on Agora’s immutable blockchain.

This election highlights just how important transparency is in voting. Agora’s involvement in Sierra Leone provides legitimacy to their election results and the country’s democratic process, in general.

Agora’s innovative use of blockchain technology is coming at a key time in global politics. Democracies seem to be under attack not only in practice but ideologically too, as some governments suggest that democracy is inefficient and prone to disruption. Agora could allow democracies to protect themselves from these threats while maintaining a defensible political system.


The author has had a working or personal relationship with one or more companies mentioned in this article in the past. Access to mentioned company’s management and information was made through the author’s personal network. All information was vetted prior to posting.


This essay is not intended to be a source of investment, financial, technical, tax, or legal advice. All of this content is for informational purposes only.

How Agora Will Use Blockchain to Bring True Democracy to the World was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

3 Innovative Ways the Blockchain can Ramp up Your In-game Revenue

2018-03-13 11:51:01

Game development is a high-risk, high-reward business that requires substantial investments in programming, math, and sound and graphic design. So developers are understandably wary of committing to, and spending money on, an additional technology such as the blockchain. The good news is that there are some awesome projects underway aiming to make it easy and seamless to integrate the blockchain into your games.

In-game items and payments are being decentralized. With the right tools, you won’t just get access to helpful new technology. You’ll be opening up your games to a truly global gaming community. However, to do it right, you’re going to have to be part of a massive industry rework.

Making money from games

The Internet has completely changed the economics of the video game industry. It used to be simple: build a game, then sell it to players for a fixed price. The development of online gaming has made things more complicated. Players now want to play alongside others from around the world in games like World of Warcraft, earn in-game currencies and items, trade with each other, and experience constantly evolving virtual worlds. These days, games need investment and development long after release.

Combine this with the fact that development costs are spiraling out of control and that the retail prices for many types of games are actually going down, and you can see why the industry is in need of new ideas. The old economic model doesn’t work in this new environment. Developers are now trying out different ways to make money on their skills and creativity.

Advertising and extra downloadable content are good streams of continuous revenue for some, but many players are frustrated with where this model is heading. We’ll be looking at what might be the most exciting and controversial method being developed in the gaming industry: In-game purchases.

In-game transactions

We’ve all encountered in-game transactions before, especially in mobile games. Many mobile games are free to download and use, but certain features will be locked until you pay. These games are called free-to-play.

Since the 2000s, small in-game “microtransactions” of $1–5 have also started to find their way into console games. One of the first examples of this business model was the controversial horse armor pack: a $2.50 add-on for The Elder Scrolls IV: Oblivion. The gaming community had mixed reactions to this development. Understandably, some hate the idea of having to pay even more after paying full price for a game. But others love the idea, and they are more than happy to pay for in-game extras, sometimes spending hundreds or even thousands of dollars on a regular basis to access these items.

In-game purchases and microtransactions let players spend as much as they want. Many gamers get to play epic games for free or at a heavily discounted price, while the big spenders can splash out on new characters and items, paying for the bulk of the game’s development costs in the process. We’ve seen this happen in the mobile market, where at one point 50% of mobile game revenue came from just 0.15% of players.

This new model requires a different way of thinking. Xbox Live’s general manager Cam Ferroni said, “You have to stop looking at video games as a toy and start looking at them as an entertainment service.”

So as a developer, what tools are there to help you deliver this entertainment service without significant time or financial investments?

Current platforms

Platforms like Steam and Roblox are currently the best outlets for game developers to create and market in-game items to players. Steam allows developers to use their marketplace and in-game payment pathways to sell in-game items and generate revenue from games continuously. Roblox is a game development platform that lets you build games and items to sell to other players. While these platforms have paid out millions to developers, they aren’t perfect.

Payments are slow, and credit card and PayPal fees massively cut into the profits on small transactions. Valve’s cut of transactions on Steam isn’t published, but it reportedly sits around the 30% mark. The Roblox marketplace fee is around the same. The marketplaces are also tightly controlled and closed off from each other and from other game marketplaces. A new business model is desperately needed.

Blockchain solutions for in-game monetization

The blockchain is the ideal solution for many of these problems. It isn’t just going to make things easier; it’s going to change how we think about games.

This isn’t just a guess either. The first blockchain games are out, and people are going crazy over them. It all started with Crypto Kitties, a blockchain-based game that allows you to own and breed new cartoon cats. Each crypto kitty is “one-of-a-kind and 100% owned by you; it cannot be replicated, taken away, or destroyed.” They are cryptocollectibles. The game is so popular that, at times, it has accounted for up to 30% of the transactions on the Ethereum network, and certain kitties are selling for tens of thousands of dollars. Total Kitty sales topped $22 million in just the first few months, and the developers behind Crypto Kitties take 3.75% of every transaction. Crypto-asset trading can be very lucrative for developers.

Crypto Kitties is just the beginning. Streamlined transactions, true ownership of digital assets, and seamless integration with eSports betting are some of the immediate benefits of blockchain technology for both developers and players. And there are some killer projects to help developers make it happen.

1. Cryptocurrency: In-Game Currency 2.0

Streamlined transactions are the most immediate benefit of implementing blockchain technology. Cryptocurrency transactions happen on a blockchain, and when implemented well they are fast, safe, and cheap.

With cryptocurrency, in-game transactions can cost mere pennies and happen in seconds. This makes true microtransactions a much better prospect, especially for developers. Even the cheapest game items can have big profit margins, and the money will be in your account almost instantly.

Digital coins and currencies in games are nothing new; the difference with cryptocurrencies is that they are decentralized. That means transactions don’t need to go through a central server. Think about how Bitcoin removes the need for banks in transactions. Players take responsibility for their own tokens and store them in their own wallets. As a developer, you won’t need to worry about security, fraud, refunds, and many other headaches and legal issues that come with the responsibility of running a centralized virtual currency.
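
To make the “pennies and seconds” claim concrete, here is a minimal sketch of estimating the network fee for a simple Ether payment with the web3.py library. The node URL is a placeholder and the snippet assumes web3.py v6+; it is an illustration, not a production integration:

```python
# Minimal sketch: estimate the network fee for one in-game payment.
# Assumes web3.py v6+ and a placeholder Ethereum node endpoint.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-ethereum-node.example"))  # hypothetical endpoint

GAS_FOR_SIMPLE_TRANSFER = 21_000   # fixed gas cost of a plain ETH transfer
gas_price = w3.eth.gas_price       # current network price, in wei per unit of gas

fee_wei = gas_price * GAS_FOR_SIMPLE_TRANSFER
print(f"Network fee for one payment: {Web3.from_wei(fee_wei, 'ether')} ETH")

# The fee depends on network conditions, not on the payment amount, so a
# $0.25 item costs roughly the same to settle as a $250 one. That property
# is what makes true microtransactions viable.
```

At typical gas prices this works out to cents per transfer, though fees spike when the network is congested, which is exactly the bottleneck discussed below.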

Another great thing is that you don’t even have to implement all this yourself. BitGuild is a project that’s making cryptocurrencies easily accessible to developers. At the core of the platform will be a gaming-specific cryptocurrency called the PLAT token. Gamers will be able to use PLAT tokens to buy and trade games, in-game items, and currencies.

The idea is to make the blockchain plug-and-play. When the platform is complete, you will be able to easily integrate PLAT payments directly into your games. In-game payments and microtransactions will be available without having to set up all the different payment channels. Players can buy or earn PLAT tokens and use them on any of the games on the BitGuild platform, or they can sell their tokens to cash out.

In-game economies are a big deal. They had already grown to a $15 billion industry in 2012, with $2 billion in World of Warcraft alone. Crypto tokens could make these economies an order of magnitude larger still.

Cryptocurrencies do come with new risks, however. Blockchain transactions can’t be refunded, and players will be responsible for their own currency. This means that if players and developers aren’t careful, there is the potential for irreversible hacks like the ones we’ve seen with Bitcoin. Also, the Ethereum and Bitcoin blockchains are currently struggling with congestion. There are some promising scaling solutions in the works, but congestion could become a serious constraint if it worsens.

2. Real Ownership of In-Game Assets

Have you ever thought that new Counter Strike skin you landed was cool? Imagine the same item, but living permanently on the blockchain, provably unique and stored in your own wallet. Now that’s boss!

Digital tokens can represent more than just currency. Game assets like items and skins can be coded into crypto tokens along with their unique appearance, characteristics, and histories. Bitcoin works as a currency because bitcoins can’t be copied or replicated; the same can be true of in-game assets. This means players will be able to truly own their digital assets. Items are all completely unique, just like items in the real world.
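
To see what “can’t be copied or replicated” means in code, here is a toy, in-memory Python ledger that mimics the ownership rules an ERC-721-style token contract enforces on-chain. The item IDs and player names are made up; a real implementation would live in a smart contract:

```python
class ItemLedger:
    """Toy stand-in for the ownership registry a token contract keeps on-chain."""

    def __init__(self):
        self._owner = {}  # item_id -> current owner

    def mint(self, item_id, owner):
        # Each item ID can exist exactly once: tokens cannot be replicated.
        if item_id in self._owner:
            raise ValueError("item already exists")
        self._owner[item_id] = owner

    def transfer(self, item_id, sender, recipient):
        # Only the current owner can move an item: true ownership.
        if self._owner.get(item_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self._owner[item_id] = recipient

    def owner_of(self, item_id):
        return self._owner[item_id]


ledger = ItemLedger()
ledger.mint("dragon-sword-0001", "alice")            # hypothetical item and players
ledger.transfer("dragon-sword-0001", "alice", "bob")
assert ledger.owner_of("dragon-sword-0001") == "bob"
```

On a real blockchain this registry is replicated across the network, so no single party, including the game studio, can quietly rewrite who owns what.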

Crypto Kitties proved that people love true ownership of digital items, and it’s easy to see how this idea can translate into the video game world. If people are going crazy for unique cats, imagine what will happen when unique characters and items with their own histories and characteristics are available for use within popular games.

Enjin Coin provides a platform to do this. You can use Enjin Coin to “mint unique in-game items, currencies, and virtual tokens using Enjin Coins as the parent currency. These assets can be converted back into ENJ anytime.”

Based on Ethereum, it’s a collection of open-source smart contracts and software development kits that you can use to easily integrate the blockchain into your games. Your players can even trade assets across other games that use Enjin Coin. Enjin is already a powerful social gaming platform, and a common decentralized currency marketplace can unite these communities even more.

Counter Strike: Global Offensive was widely regarded as a flop when it first came out. That changed with The Arms Deal Update, which allowed players to trade weapon skins with each other and “experience all the illicit thrills of black market weapons trafficking.” After that update, the number of monthly players skyrocketed, growing 26-fold over the next three years. That’s a ridiculously high jump, and the same could happen for the whole gaming industry.

Players are begging to trade useful and unique items with each other. Blockchain platforms like Enjin Coin could help you integrate these features into your games and open up new revenue channels in the process.

As with cryptocurrency, a potential problem with tokenized items is that players have to take responsibility for them. Developers will need to find ways to help all players do this effectively. Also, items will only be transferable between games that use the same blockchain, so choosing the right one will be important.

3. Direct Betting in Competitions

Streaming platforms like Twitch have brought eSports to the big leagues. The industry is projected to have over half a billion viewers by 2020. But there’s still one key ingredient missing: betting. In regular sports, revenue from betting on a game far outweighs the revenue from everything else. So far, the eSports betting market hasn’t enjoyed the same success.

ESports betting is already measured in the billions of dollars, but the industry is heavily restricted by prohibitive laws in countries like the US. The potential is there, yet most of this betting is concentrated in only a few games. The main game is League of Legends, which has a 38% market share and more viewers than the NBA finals. All this has happened in just a few years.

The largest obstacle remains regulation: eSports betting is, in fact, banned in the US. Unikrn is an established eSports betting platform, and UnikoinGold is their blockchain solution to these problems. They intend to use the power of cryptocurrency to open up eSports betting across the world. Users will be able to bet on professional eSports matches, play for UnikoinGold in competitive video game matches, and host tournaments. No bank account required.

Developers should be excited about this. UnikoinGold will be an extension to platforms like Twitch that could allow players to bet on the games they are streaming from within their game launcher. Just plug it in and offer a completely frictionless streaming/betting experience to the players.


The video game industry is bigger and more connected than ever before, and blockchain technology is showing how it can open up whole new ways to monetize games. There are some great new projects making it easier for developers to integrate these features into their games. The blockchain can’t help design the characters or create epic battles, but it can take care of in-game monetization. Savvy developers are experimenting with the blockchain, and many are making money from it already. Those who don’t are at risk of getting left behind.

3 Innovative Ways the Blockchain can Ramp up Your In-game Revenue was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Telemedicine’s Potential Could Be Crushed By Security Flaws

2018-03-13 04:00:06

Despite what “Grey’s Anatomy” gets wrong about medicine, HIPAA, interpersonal relationships, and, well, everything else, the show does get some things right. It recently aired a two-episode arc about cybersecurity and ransomware, and it was dramatic. Patient charts were unavailable. Necessary blood and medicine were inaccessible behind keypad-locked doors. Lives hung in the balance as doctors operated without monitors and nurses tried to remember who was given what medicine when. The FBI was called in.

It was a perfectly orchestrated, panicky circus.

While the drama may not have played out the way it would in a realistic, functioning hospital, the fundamental issue hit close to home. Technology is a gift, and facilities are increasingly relying on monitors and electronic records to deliver care. There is nothing wrong with utilizing technology to improve a sector as necessary as healthcare, but the potential weaknesses of electronic data and monitoring may compromise the entire system.

The Future of Telehealth

Telehealth carries with it the promise of improved plan adherence, increased access to healthcare in rural areas, and a greater analytic power for population health trends and solutions. Big data promises the ability to make community health models a functioning reality, and personalized health plans implemented through applications stand to change the future of public health. Already, 75 percent of practitioners believe that technology is facilitating the delivery of better care, and we’ve just begun.

For rural communities and underserved populations, this trend is especially promising. By increasing the number of patients a provider can reach in a given period, and by making patient records more accessible to specialists and cooperative care networks, the move to electronic health records and telemedicine means that previously isolated individuals will have access to a greater range of providers. In addition, new legislation now allows nurses to treat patients across state lines without obtaining a new license, giving rural providers more flexibility.

Furthermore, as patients are prescribed treatment plans requiring everything from medication adherence to a diet and exercise change, health and fitness applications can assist in data collection and reporting back to providers. Check-ins can be arranged or triggered by a large divergence from the plan, potentially increasing the effectiveness of a doctor’s orders.
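
As a sketch of how such a trigger might work (the threshold and field names here are hypothetical, not taken from any real telehealth product), a reporting app could compare logged adherence against the prescribed plan and request a check-in when the divergence is large:

```python
def needs_check_in(prescribed_doses: int, taken_doses: int, threshold: float = 0.8) -> bool:
    """Flag a provider check-in when adherence falls below a hypothetical threshold."""
    if prescribed_doses == 0:
        return False  # nothing prescribed, nothing to monitor
    adherence = taken_doses / prescribed_doses
    return adherence < threshold

# Example: 5 of 7 prescribed doses taken this week is ~71% adherence,
# below the 80% threshold, so a check-in is triggered.
assert needs_check_in(prescribed_doses=7, taken_doses=5)
```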

Weaknesses of Telehealth

Unfortunately, the benefits don’t come without consequence. Digital records, electronic prescriptions, and care plans are vulnerable to theft and manipulation in the same way that any online information is, and healthcare technology that runs outdated programs can put patients in life-threatening danger. Insurance companies almost exclusively require electronic claims at this point, forcing facilities to submit health records and sensitive patient data through potentially weak channels in order to be reimbursed for care.

Hackers with the right set of abilities, tools, and malicious intent can disrupt day-to-day medical functions by locking practitioners out of patient records, freezing access to drugs, or interrupting service to the monitors that help keep patient vitals under control. As it is, many hospitals run old software or put off installing updates, because taking systems offline interrupts patient monitoring and creates needlessly hazardous care conditions. The trade-off is that if ransomware were to infect the system, patient care would be interrupted without warning, potentially indefinitely.

Beyond losing monitoring capabilities or access to health records, the emergence of e-prescriptions and remote check-ups introduces another factor to be considered: manipulation of data. While it’s unlikely to be utilized in a large-scale attack, the manipulation of prescriptions or falsification of drug records could result in a patient being given toxic doses or combinations of drugs. The ability to get prescriptions filled without setting foot in a doctor’s office is attractive, until the order is manipulated en route to the pharmacy and the patient walks out with an entirely different dosage than intended.

Rural areas that stand to benefit the most from remote care also appear to be more vulnerable to cyberattacks. Without the financial capital of their metropolitan counterparts, rural medical centers are more likely to be using old equipment or software simply because they can’t afford to upgrade. There’s not enough cash flow to make state-of-the-art equipment worth it.

Cybersecurity Solutions

In order to keep patient information safe and sound, the health industry has a lot of work to do. Beyond just securing electronic health records, best practices will need cooperation from app developers, software companies, insurance providers, and health sector employees.

First, cloud-based and hard-drive storage will need to be secured to keep information safe. Software must be kept up to date while prioritizing patient care, cycling equipment in and out of use for updates so that treatment is not interrupted. Data transmission channels between medical facilities, and to and from insurance providers, must be secured with tightly limited access. Data in transit needs to be protected through encryption, and the risk of sending information to the wrong recipient needs to be minimized.
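
For the encryption piece, a minimal sketch using Python’s cryptography library shows the idea. Key management, the genuinely hard part in practice, is omitted, and Fernet simply stands in for whatever scheme a real deployment mandates:

```python
from cryptography.fernet import Fernet

# In practice the key would come from a managed secret store, never hard-coded.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient_id": "12345", "rx": "example-drug 10mg"}'  # hypothetical payload
token = cipher.encrypt(record)   # safe to transmit; unreadable without the key

# Fernet ciphertexts are also authenticated: if the token is altered in
# transit, decrypt() raises InvalidToken instead of returning bad data,
# which guards against the data-manipulation risks described above.
assert cipher.decrypt(token) == record
```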

Medical facilities, even small clinics, need to up their internal security by maintaining proper HIPAA standards, implementing multi-factor authentication systems, and following principles of least privilege when determining how much clearance an employee should receive. Additionally, employees need to be trained in best practices to avoid phishing attempts and minimize the risk of compromising patient data.
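
For the multi-factor piece, here is a minimal sketch of one common second factor, a time-based one-time password, using the pyotp library (enrollment and secret storage are simplified for illustration):

```python
import pyotp

# Enrollment: generate a per-user secret and share it once, e.g. via QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: alongside their password, the user submits the 6-digit code
# from their authenticator app.
submitted_code = totp.now()          # stand-in for what the user types in
assert totp.verify(submitted_code)   # valid only within a short time window
```

Even if a password is phished, an attacker still needs the rotating code, which is exactly the property you want protecting access to patient records.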

Software updates and staff training come with a serious price tag. When allocating funds, budget committees are pressured to put money into revenue-generating projects. Security is rarely upgraded unless an incident exposes a weakness or a mandate comes down from a higher authority. In the case of ransomware and patient data, putting money into security is a “just in case” move that isn’t often prioritized, especially when hospitals are fairly reimbursed for only 85 percent of care or end up writing off charges when patients can’t pay.

Meeting in the Middle

At the end of the episode, Grey’s Anatomy didn’t pay to have the patient records released, nor did they allow the FBI to track down the nefarious hackers and exact justice upon them. Rather, an administrator found she had an equally gifted hacker among her employees and chose to fight fire with fire.

Obviously, the dramatic conclusion to a television series’ portrayal of a real-life scenario is not common or realistic. Medical facilities will need to be much better prepared and will face far greater consequences in the event of a ransomware attack or the compromise of patient information. In order to truly reap the benefits that big data and telehealth stand to bestow on the healthcare community, players must work together to protect data and increase security against potential data breaches.

Telemedicine’s Potential Could Be Crushed By Security Flaws was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Beyond Psychographics: the Psychological Factors Driving Millennials.

2018-03-13 03:59:46

In the fast-paced world of technology and marketing, many variables are used to inform innovation and design, and to identify potential customers and target markets. Psychographics is one field of study that attempts to quantify variables such as consumer activities, interests, and opinions (AIOs), along with attitudes, values, and behaviour. By some accounts, psychographic segmentation is even more important than knowing customer demographics.

Psychographic segmentation helps marketers understand the why — the goals, challenges, emotions, values, habits, and hobbies that drive purchase decisions.

One important variable that is missing from psychographic segmentation is the powerful influence of attachment dynamics, or in layman’s terms, our biologically wired need to connect and stay connected with significant others.

If psychographics is a way of looking at the outer shell of a car, then attachment theory is a way of looking under the hood at the engine components powering the car.

Attachment Theory

Attachment theory attempts to explain how relationships with primary caregivers (usually parents) in the first six years of life and beyond lay the templates for future relationships and how we relate to the world.

More specifically, attachment theory, supported by neuroscience, claims that within the first three years of life our deep brain structures become hard-wired via how our primary attachment figures attend to and respond to our pre-verbal cues for affection, stimulus, safety, comfort and soothing.

This theory states that during these formative years children need caring, trusting, stable, predictable, emotionally available and receptive caregivers to develop into healthy adolescents and adults.

If provided with this ideal context, we develop what attachment theorists call a secure attachment style whereby we trust others and ourselves to navigate the challenges of life with flexibility, resilience and emotional intelligence.

In the absence of this ideal context, we can develop an insecure attachment style that hinders our ability to adapt and form meaningful, trusting relationships, and leaves us unduly preoccupied with issues of trust and self-doubt.

The last 30 years have seen important changes to attachment and caregiving practices.

These changes have been reflected by and through new technologies and their usage. Beyond the usefulness of technology in addressing real-world problems and meeting both genuine and manufactured needs, for these technologies to take hold they must necessarily reflect unmet needs within the individuals embracing them.

Institutional daycare

Historically, relational templates and structures were centralized and trust-based, with working parents placing their children in the care of a trusted family member or friend. Today, by contrast, attachment structures are decentralized: centralized parenting has been relegated to daycare institutions, where children as young as one month old are placed in anonymous hands.

In this decentralized caregiving economy, anonymous daycare providers are now responsible for the primary care of our children, and children turn to multiple impersonal, anonymous surrogate parental nodes to attend to their emotional-relational needs.

Changes in family structure

In addition, with the increased rates of separation and divorce, the centralized nuclear family is being replaced by the decentralized network of the blended family. The blended family decentralizes primary attachment figures, creating more attachment nodes with step-parents, step-siblings, step-extended families, and so on.

The blended family is less stable, with higher divorce rates than nuclear families, creating a context for increased attachment volatility and anxiety among all family members.

At the heart of this anxiety is the ambiguity around the new familial contract where relational trust (i.e., the need to renegotiate the relational contract beyond blood lines within the family) is front and centre in the evolving dynamic of the blended family.

Peer-attached culture

These significant changes in childcare and family structures are taxing on all family members, leading to higher levels of anxiety (which is essentially diffused fear, mistrust, and self-doubt) and an ongoing search for more secure, stable, and predictable relational structures. In the absence of centralized, stable, trusting and predictable parenting and caregiving relationships, many children inadvertently turn to peers for guidance and validation.

This creates a peer-attached culture where only peers are considered reliable and trusted allies.

Because the decentralized structures of family and caregivers have failed to provide them with the optimal context in which to meet their attachment needs, the emerging peer-oriented culture becomes a defining relational template for this generation.

The psychological fallout from this is a generation of insecure, anxious children turning to other insecure, anxious children to guide them through the labyrinth of adolescence and young adulthood.

Underlying this peer-oriented tendency is a fundamental distrust in the reliability and predictability of a centralized parental authority and its surrogates to provide them with the relational-emotional context they need.

The paradox of peer-oriented relationships is that even though they may feel intense, they are more often than not devoid of any real intimacy.

This is evidenced by the reluctance of many teenagers and young adults to reveal their true vulnerabilities to their peers, even though they claim that these same peers are the most important people in their lives.

AI: Move over, real human beings.

One manifestation of this peer-oriented culture is the anxious preoccupation with one’s status on social media platforms. Intimate relationships have been replaced by superficial social media interactions, where fear of being truly known and revealed is a defining characteristic of the medium.

The repercussion of this is that self-esteem is now as volatile as weather patterns, at the mercy of innumerable faceless friends’ likes or dislikes of one’s superficial profile activities.

It is no wonder that this generation suffers more from depression, anxiety, loneliness, and boredom than previous generations. Yet many studies on mental health identify two to five intimate friendships, along with family support, as sufficient to buffer people from such distress.

Furthermore, the developers behind Woebot, a robot therapist that runs on Facebook Messenger, claim that teenagers and young adults are more comfortable sharing their vulnerabilities with a robot than with a real human being. However, this robot therapist is not bound by a code of ethics, nor is your intimate life completely hidden from Facebook.

Woebot’s therapeutic claims are weak at best. Just as playing Fifa 2017 on your Xbox may make you better at playing Fifa 2017 on your Xbox, but not necessarily better at playing soccer on a real field with real players, interacting with a robot may make you better at interacting with robots, but not necessarily “better” at interacting with other human beings (see: AI Psychotherapy: My Ideal Therapist?).

Attention deficit-hyperactivity disorder

ADHD is all the rage now, with Big Pharma riding the coattails of the new DSM-V diagnostic category and medicalizing what is basically one of the many cognitive and attentional “symptoms” of an insecure attachment style. This in part helps us understand the appeal of apps and other technologies, alongside media and marketing strategies, that exploit this relational-cognitive deficit.

Disruptive technologies

Furthermore, emerging blockchain technologies like Bitcoin and Ethereum, with their promises of an immutable decentralized ledger, peer-to-peer interactions without the oversight or control of a centralized authority, decentralized autonomous organizations (DAOs), trustless smart contracts, and control over one’s privacy, are perfectly placed to reassure the preoccupations of this generation.

Political affiliation

The 2016 American election is revealing to the extent that Bernie Sanders, a 75-year-old white male, ran a campaign that appealed to the values, concerns, and preoccupations of this decentralized-peer-attached (DPA) generation.

From financing to volunteerism, his campaign structure was highly decentralized, mobilizing support through social media platforms that relied heavily on the trustworthiness of peers to drive local action, not on a centralized organization led by an all-knowing figurehead with ties to centralized power structures. As the New York state primary exit polls revealed, 65% of 18–29-year-olds voted for Sanders versus 35% for Clinton.

Finally, our brains and basic needs as a species have not evolved very much over the last 30 thousand years. But what has changed dramatically over the past thirty years are the tools at our disposal to connect with one another. These tools also mirror our evolving attachment practices and relational templates. And understanding the psychological factors underpinning these changes can lead to greater insights into how to best meet the needs of this generation.

Therefore the challenge for responsible innovation, design, marketing, and policy, from an attachment and mental health perspective, will be to meet the needs of this DPA generation in ways that foster more meaningful connections and intimate interpersonal engagement.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

If you are as passionate as I am about human beings and technology, and the future of our civilization, and want to discuss these ideas, please leave a comment and do get in touch. I’m on Twitter @jacquesrlegault

If you enjoyed this article, feel free to clap, share, and comment.

Beyond Psychographics: the Psychological Factors Driving Millennials. was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Coral Protocol: Making Blockchain Less Scary

2018-03-13 03:56:34

Founder Interview

Interview with David Kuchar and Jon Gillon, Co-Founders of Coral Protocol

Coral Website

I was genuinely shocked when Coral Protocol took the stage at the DNA Fund ICO Pitch Day. It was one of those “I can’t believe that a) this is possible, and b) if it is possible, no one has done it yet” moments. This has the potential to become ubiquitous across blockchain technology, while removing some of the friction encountered on the path to mass adoption. Coral Protocol doesn’t hope to replace existing paradigms; it hopes to increase safety and usability on the blockchain for consumers.

A little background

Coral wants to partner with exchanges and wallets to help protect crypto users. The protocol can be implemented in anything with a transactional nature. (Blockchain companies without a “transactional nature” are companies like Steemit, where your transactions are “up-votes” for an article.) Anywhere you pay someone by entering a blockchain address is somewhere Coral Protocol can be implemented.

I can see a future where Coral Protocol is integrated into every blockchain-based platform, becoming something like SSL certification or a badge of trust from the Better Business Bureau (BBB).

I met the Coral team at the first ICO pitch day I attended, which I covered in my last article, “The Anatomy of an ICO Pitch Day”. Here’s an excerpt on what I learned about them during their pitch:

Coral is an interoperable blockchain protocol that offers payers of cryptocurrencies a decentralized safeguard against fraud. Coral creates a trust score for every cryptocurrency address, enabling senders to know whether the recipient address is trustworthy while preserving user anonymity and autonomy.

After their presentation I hunted them down, excitedly blabbering about how I wanted to learn more about the project.

You can join the Coral Protocol Telegram channel here and speak with the founders directly

“Right now, about $4 million worth of ETH and BTC is phished every single day, which is essentially a massive bank heist. Every single day. More phishing occurs in blockchain than in any financial instrument, ever. 10% of all funds from ICOs are lost to phishing.”


Reza: Can you tell me about your backgrounds?

David: I’ve been a programmer for quite a long time. I’ve been writing code since I was 12, and I’m 36 now, so it’s been a little while. I started my first company, a small one, when I dropped out of my PhD program in 2005, then travelled the world for quite a while and lived in Thailand for about a year in 2007. Then I came to Silicon Valley, where I started my first payments company. It was in AngelPad, funded by Google Ventures. We exited from that after about 3–4 years. I went to work for a wine company, which was pretty frickin fun; if you ever want wine suggestions, let me know.

I was a CTO consultant at a firm called SVSG for about 4 years. We were like a McKinsey or Bain but for CTO management consulting.

Then I worked on a second ACH payments FinTech company, which was venture-backed, led by SoftTech. I exited that after a year and have been working in blockchain exclusively ever since!

Jon: Dave and I actually met when he was a CTO consultant and I was running Roost. Roost was a peer-to-peer storage and parking company that was acquired last year.

I’ve been in the sales and business development world my whole life, but I’ve always been an entrepreneur. I’ve failed 8 companies now and sold 2. Roost was acquired, and I also exited from DropertyTax, a property tax appeal arbitraging company. That was in 2012; in 2014 I started Roost. We expanded to seven cities and got acquired by a company called Spacer. After that I got into blockchain investing. I was fascinated by blockchain. Dave and I, who had become close friends from the time he consulted for me at Roost, linked up with the rest of our team, who are all friends, and started Coral.

Reza: You touched on this in your last answer, but when was the first time you heard about cryptocurrency?

David: I’ve been involved more or less since 2008, when I first started hearing about it at FinTech conferences. I didn’t really get into it beyond downloading a wallet at a conference. I think it was “Future of Money” in San Francisco.

Other than that I wasn’t heavily involved; I was more of a bystander. I got properly involved in 2015–2016, when I started researching more.

Jon: I really got fascinated by cryptocurrencies when I started mining Litecoin in 2013. I had a Litecoin mining rig in my apartment for a couple of years.

Reza: When did you start working on Coral Protocol?

David: We started working on the core tech in November. We weren’t talking about it until we got more of the pieces together. We were just in the lab. Now we’re at a point where we can actually work with people and provide value to exchanges and wallets.

Reza: Before we get into the many facets of what Coral Protocol does, would you mind talking about the problem you’re addressing?

Jon: Right now, about $4 million worth of ETH and BTC is phished every single day, which is essentially a massive bank heist. Every single day. More phishing occurs in blockchain than in any financial instrument, ever. 10% of all funds from ICOs are lost to phishing, which is actually causing some projects to forgo a public sale, because it’s unconscionable how much money is lost. 50% of all fraud on the blockchain is phishing.

David: 0.6% of all transactions on the blockchain are phished, which is interesting because that’s close to what people typically charge to protect ACH payments. If you look at ACH payments, providers usually charge 1% of the transaction, and that 1% goes entirely toward fraud prevention.

Reza: For people who aren’t familiar, what is phishing?

David: Phishing is when you trick someone into thinking that you are a trusted third party, such as MyEtherWallet or MetaMask. People will download an app or go to a website thinking it belongs to the company, then use it as normal and hand over private information, or make a transaction to an Ethereum or Bitcoin address and end up sending money to a fraudster rather than the intended party.

Jon: It’s a bait and switch.

Reza: What’s the quick and dirty elevator pitch for Coral protocol? How are you guys addressing these problems?

Jon: The Coral protocol is a security protocol that builds safeguards around blockchain transactions in order to fight fraud. The safeguards we build are twofold. One: an anonymous blockchain trust score that can anonymously assign a level of trust to any and every blockchain address, so you can know whether an address is trustworthy and avoid sending to untrustworthy ones. We do this by collecting evidence of identity as well as evidence of fraud. The second, built on top of the trust score, is blockchain payment protection: a restitution system for victims of fraud that can be purchased on a transaction to hedge against the risk of phishing loss.
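
Coral hasn’t published its scoring model, so purely as a hypothetical illustration of the shape such a score could take, here is a toy function that folds positive identity evidence and negative fraud evidence into a bounded per-address number (every signal name and weight below is invented):

```python
def trust_score(evidence: dict) -> float:
    """Illustrative only: fold address evidence into a 0-100 trust score.

    All signal names and weights are hypothetical; Coral's real model
    is not public.
    """
    score = 50.0  # neutral prior for a fresh, unknown address
    score += 10.0 * evidence.get("linked_verified_addresses", 0)
    score += 5.0 * evidence.get("exchange_custody_attestations", 0)
    score += 0.5 * min(evidence.get("months_of_clean_history", 0), 24)
    score -= 40.0 * evidence.get("fraud_reports", 0)  # fraud evidence dominates
    return max(0.0, min(100.0, score))

print(trust_score({"exchange_custody_attestations": 1}))                 # 55.0: slightly above neutral
print(trust_score({"fraud_reports": 2, "months_of_clean_history": 12}))  # 0.0: fraud evidence wins
```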

Reza: If I want to improve my trust score, you guys have a platform that I can log into and verify my identity? Is that the process?

Jon: Yes, you will be able to log in and take steps to improve your trust score; however, our goal isn’t to become an ID-verification system. The trick is that we have focused on designing a system that works even when we don’t know who is in control of an address.

David: There are also going to be ways to improve your score on a particular address. You’ll be able to link one address to another. For example, if you have a Bitcoin address, you can link it to your Ethereum address. If you have the private keys for both, one address can piggyback on the score of the other.
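
As an illustration of how proving control of two addresses might look, here is a sketch with the eth_account library. This is my illustration, not Coral’s actual mechanism, and it uses two Ethereum keypairs for simplicity; the Bitcoin side would use Bitcoin’s own message-signing scheme:

```python
from eth_account import Account
from eth_account.messages import encode_defunct

# Two hypothetical keypairs standing in for a user's existing addresses.
acct_a = Account.create()
acct_b = Account.create()

# The owner of address A signs a statement linking A to B...
message = encode_defunct(text=f"I control {acct_a.address} and link it to {acct_b.address}")
signed = Account.sign_message(message, private_key=acct_a.key)

# ...and any verifier can check the signature really came from address A,
# so address B can piggyback on A's score (a matching proof from B's key
# would complete the two-way link).
recovered = Account.recover_message(message, signature=signed.signature)
assert recovered == acct_a.address
```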

Reza: To be blunt, how the hell did you come up with this? Please correct me if I’m wrong, but it sounds like you’re protecting, and loosely regulating, a decentralized system by implementing another decentralized system to do so.

David: Originally we were looking at creating a decentralized credit bureau. A non-anonymous bureau, kind of like Bloom or Civic; there may even be a few others. We were looking at that space pretty heavily and even wrote a white paper to that effect. We eventually stumbled onto the idea: what if we had a credit score that didn’t try to bridge the gap between the traditional financial services industry and the new one?

Why not make one that’s fully blockchain-centric, that relies only on the new world and ignores the old? The first thought around that was: what if the credit score was based on your blockchain address, which could be anonymous, and nothing else?

So that’s the idea, and version 2 of this will be providing restitution to people who are actually phished. Later on, we’ll add services like escrow, or a full clearing house with guaranteed payments. That’s the long-term vision, at least 2–3 years out.

“Blockchain Payment Protection will be a layer that is leveraged behind the scenes by wallets and exchanges, to protect certain payments. They can pay a fee to code the protocol into transactions, to protect users and the exchanges from fraud. If a payment is made that they want to protect, they can do that.”

Reza: What’s your main focus right now: are you in development, fundraising, looking for partners?

Jon: Product and partnerships are number 1 and number 2. We’re building product like crazy, and we’ve just signed deals with our first partners. We’re also doing typical customer development with some exchanges; they’ll be a good source of data to help us understand the endpoints of the system better, the on-roads and off-roads. A token sale is coming as well. We haven’t set a final date or the terms for it, but we’re ramping up.

Reza: And you guys are doing a private sale?

David: That’s the plan. I don’t think we’ll do a full public token sale at this time. I think we will do a private round with people who are potential partners, have a use for the token, or bring strategic relationships.

Reza: Who are your potential partners and what does being a partner to Coral protocol mean?

David: The partners we’re targeting are people who run wallets and exchanges, like Jon said, the on-roads and off-roads into blockchain. We need them to be partners for several reasons. We need them to give us evidence that they are in custody of addresses they create, because those addresses will be freshly baked and won’t have any past history. That’s all strategic because it helps us build our data set. They would spend our REEF tokens by pulling down API requests, and would also be our customers.

Jon: We also work with decentralized applications that have a transactional nature. Anywhere a blockchain address is entered, Coral should be integrated.

Reza: If I am an exchange and I want to partner up with you guys, that means that all the transactions on my exchange are protected by Coral protocol?

David: To be clear, Blockchain Payment Protection (BPP) is version 2. Version 1 is the Anonymous Blockchain Trust Score, and we have to make version 1 work before we can get to version 2. BPP could be a layer that is leveraged behind the scenes by wallets and exchanges to protect certain payments. They can pay a fee to code the protocol into transactions, protecting users and the exchanges from fraud. They can also make it an option for customers, where a customer can opt in to payment protection and pay a larger transaction fee in exchange for some protection.

Jon: Most of our focus is on the actual trust score. It already works, and will continue to improve through more data and more partnerships.

We have a lot of work to do, but if successful, we will have helped blockchain become the truly global monetary system that until now has only been dreamed about.

Reza: So you guys are going to make blockchain safer for the average user?

Jon: I send multiple blockchain transactions per day, and every time I do I have a mini panic attack. You should not have to get used to this fear; it should be eliminated. Blockchain is all about trust and guarantees, yet sending transactions is terrifying, and that is a major roadblock for a lot of people. It’s the wild wild west; people are getting robbed left and right, and it’s unacceptable. Coral Protocol is here to make the blockchain safe and give peace of mind to its users. I think that if the fear of losing your money is taken out of the equation, blockchain can reach a much higher level of adoption and ultimately mass adoption, to the point where it can actually achieve its intended transformative nature.

Reza: Five years from now, if you guys are 100% successful in executing everything you want to, what does the blockchain environment look like to you?

David: I think the blockchain will be a lot safer. Right now, blockchain is reminiscent of “Catch Me if You Can”, where Leo is running around as an airplane pilot cashing fake checks left and right. That all stopped with the fraud prevention measures put in place in the late ’70s.

That’s what we’re doing with Coral. We’re laying the groundwork for universal consumer protections in blockchain. We have a lot of work to do, but if successful, we will have helped blockchain become the truly global monetary system that until now has only been dreamed about.

Coral Protocol: Making Blockchain Less Scary was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more