Great engineers practice defensive communication

2018-05-16 20:42:30

Good engineers I’ve worked with over the years have always had one thing in common — they prefer precise communication. I do too.

Good engineers will demand that requirements and specifications are spelled out exactly and then make sure that they meet all the criteria perfectly. But great engineers communicate defensively.

I don’t mean defensive as in “it wasn’t my fault” — I mean defensive as in defensive driving.

As an engineer, it’s quite natural to apply the engineering mindset to pretty much everything happening around you — and we’ve actually seen this have tremendous results in non-engineering disciplines like sales, marketing, fundraising, and even PeopleOps (I mean come on, PeopleOps is totally an engineering phrase).

The one place where I’ve seen the engineering mindset fail over and over is workplace communication — and I’m the first to admit that I’ve likely failed at this at least once a day for as long as I can remember.

One of the biggest improvements, however, for me came from a simple realization.

If you want to engineer precise communication, you have to practice defensive communication.

One of the biggest mistakes I used to make was to assume that incoming communication was “the truth, the whole truth, and nothing but the truth” — aka exactly what I “needed to know” or exactly what I “needed to do”.

I assumed that I was getting a spec that was always perfect.

If I were a driver, that would be like assuming that everyone always stops at a red light. You can’t assume that if you want to drive safely; you still have to watch the road and correct for other drivers’ mistakes to avoid accidents.

That’s the basic idea of defensive driving — to assume that other drivers are always making mistakes, but instead of painting a giant middle finger on your windshield, you defensively anticipate and check for them as you’re driving.

If everyone does this, the number of accidents is dramatically reduced.

Pretty much every communication accident I see on engineering teams can be boiled down simply to the fact that the people in question didn’t communicate defensively.

If the first driver assumes that the second driver will pay attention to their rearview mirror, some accidents happen.

If the second driver assumes the first driver will always flash their blinkers before turning, some accidents happen.

But it’s far safer if both assume that the other is not.

Here’s an example of an engineering communication accident:

PM: This feature should guarantee that A=B and B=C

Engineer: Ok it’s done

PM: Uh your code is sloppy because A does not equal C!

Engineer: That wasn’t in the spec. It’s your fault.

PM: It’s your fault. You should have inferred that A=C from the spec.

Here’s a defensive version of the same communication.

PM: The goal of this feature is to make sure that A, B, and C are all equal. So we’ll need to check A=B, B=C, C=A, and maybe more depending on how you choose to implement it.

Engineer: It’s done. Since your specified checks didn’t assume that order matters, I also guaranteed the reverse, like B=A, because I’d expect that to be part of the stated goal of the project even though it was left out of the spec. Let me know if that’s not the case.

PM: Sweet, thanks for catching that!
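To make the example concrete, here is a minimal sketch in JavaScript (names are hypothetical, not from the original exchange) of the difference between checking only the literally specified pairs and checking defensively, i.e. every pair in both directions:

```javascript
// Hypothetical helper: verify that every value in the list is equal to
// every other value, in both directions, not just the pairs the spec lists.
function allEqual(values, eq = (a, b) => a === b) {
  for (let i = 0; i < values.length; i++) {
    for (let j = 0; j < values.length; j++) {
      if (i !== j && !eq(values[i], values[j])) return false;
    }
  }
  return true;
}

console.log(allEqual([1, 1, 1])); // true: A=B, B=C, C=A all hold
console.log(allEqual([1, 1, 2])); // false: A=B holds but B≠C
```

The same instinct applies to specs themselves: enumerate what the stated goal implies, not just what the spec literally lists.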

Do you agree about this style of communication? Let me know your thoughts.

About the author: Vinayak is the founder and CEO at Drafted, the referral network. You can find his other writing on LinkedIn and the Drafted blog

Great engineers practice defensive communication was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Web Scraping With Google Sheets

2018-05-16 20:32:25

Web scraping and utilizing various APIs are great ways to collect data from websites and applications that can later be used in data analytics. There is a company called HiQ that is well known for web scraping. HiQ crawls various “public” websites to collect data and provide analytics for companies on their employees. They help companies find top talent using data from sites like LinkedIn and other public sources to gain the information needed for their algorithms.

However, they ran into legal issues when LinkedIn asked them to cease and desist, as well as put in certain technical methods to slow down HiQ’s web crawlers. HiQ subsequently sued LinkedIn and won! The judge said that as long as the data was public, it was OK to scrape.

Image from CommitStrip

Web scraping typically requires a complex understanding of HTTP requests, faking headers, complex Regex statements, HTML parsers, and database management skills.

There are programming languages that make this much easier such as Python. This is because Python offers libraries like Scrapy and BeautifulSoup that make scraping and parsing HTML easier than old school web scrapers.

However, it still requires proper design and a decent understanding of programming and website architecture.

Let’s say your team does not have programming skills. That is ok! One of our team members recently gave a webinar at Loyola University to demonstrate how to scrape web pages without programming. Instead, Google sheets offer several useful functions that can help scrape web data. If you would like to see the video of our webinar it is below. If not, you can continue to read and figure out how to use Google Sheets to scrape websites.

Google Sheet Functions For Web Scraping

The functions you can use for web scraping with google sheets are:

  • ImportXML
  • ImportHTML
  • ImportFEED
  • ImportDATA

All of these functions will scrape websites based on the different parameters provided to the function.

Web Scraping With ImportFeed

The ImportFeed Google Sheets function is one of the easier functions to use. It only requires access to Google Sheets and the URL of an RSS feed, the kind typically associated with a blog.

For instance, you could use your own blog’s RSS feed.

How do you use this function? An example is given below:

=IMPORTFEED("<your RSS feed URL>")

That is all that is needed! There are some other tips and tricks that can help clean up the data feed as you will get more than just one column of information. For now, this is a great start at web scraping.
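For reference, IMPORTFEED also accepts optional arguments to select specific parts of the feed, include headers, and cap the number of items returned; the URL below is only a placeholder:

```
=IMPORTFEED(url, [query], [headers], [num_items])
=IMPORTFEED("https://example.com/feed", "items title", TRUE, 10)
```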

Do The Google Sheet Import Functions Update?

All of these import functions automatically update data every 2 hours. A trigger function can be set to increase the cadence of updates, but that requires more programming.

That’s it for this case! From here, it is all about how your team uses it. Make sure you engineer a solid data scraping system.

The picture above is an example of using the ImportFeed function.

Web Scraping With ImportXML

The ImportXML function in Google Sheets is used to pull out specific data points using HTML ids and classes. This requires some understanding of HTML and parsing XML, and it can be a little frustrating. So we created a step-by-step guide for web scraping HTML.

Here are some examples from an EventBrite page.

  1. Go to the EventBrite page.
  2. Right click → Inspect Element.
  3. Find the HTML tag you are interested in.
  4. We are looking for <div class="list-card__body">Some Text Here</div>.
  5. This is the tricky part. The first thing you need to pull out of this HTML tag is the type: <div>, <a>, <img>, <span>, etc. The type is called out using “//” followed by the tag name, such as “//div”, “//a” or “//span”.
  6. Now, if you actually want to get the “Some Text Here”, you will need to call out the class.
  7. That is done by combining “//div” with “[@class='class name here']”.
  8. The XML string is //div[@class='list-card__body'].
  9. There is another data value you might want to get.
  10. Say we want to get all the URLs.
  11. This involves pulling out a specific attribute value inside the HTML tag itself. For instance, <a href="...">Click here</a>.
  12. Then it works like step 7.
  13. The XML string is //a/@href.
  14. The general form is =IMPORTXML(URL, XML string).
  15. =IMPORTXML("<URL>", "//div[@class='list-card__body']")

The truth about using this function is that it requires a lot of time. Thus, it requires planning and designing a good Google Sheet to ensure you get the maximum benefit from utilizing it. Otherwise, your team will end up spending time maintaining it rather than working on new things, like in the picture below.

From xkcd

Web scraping With ImportHTML

Finally, we will discuss ImportHTML. This will import a table or list from a web page. For instance, what if you want to scrape data from a site that contains stock prices?

We will use a page that has a table of the stock prices from the past few days.

Similar to the past functions, you need to use the URL. On top of the URL, you will have to specify which table on the webpage you want to grab, by passing its index.

An example would be =IMPORTHTML("<URL>", "table", 6). This will scrape the stock prices from the link above.
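The general form takes the page URL, a query of either "table" or "list", and the 1-based index of that element on the page; the URL below is only a placeholder:

```
=IMPORTHTML(url, query, index)
=IMPORTHTML("https://example.com/stocks", "table", 6)
```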

In our video above, we also show how we combined scraping the stock data above and melded it with news about the stock ticker on that day. This could be utilized in a much more complex manner: a team could create an algorithm that utilizes past stock prices, as well as news articles and Twitter information, to choose whether to buy or sell stocks.

Do you have any good ideas of what you could do with web scraping? Do you need help with your web scraping project? Let us know!

Other great reads about data science:

What is A Decision Tree

How Algorithms Can Become Unethical and Biased

How To Develop Robust Algorithms

4 Must Have Skills For Data Scientists

Web Scraping With Google Sheets was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Mastering MongoDB - Ahhh …! Someone just dropped a collection

2018-05-16 19:56:26


“A man stretching out his two hands with careless written on them in black and white” by Mitchel Lensink on Unsplash

Have you come across a situation where someone accidentally dropped an entire collection in production? Unless you can restore the data from a backup, you are in a terrible situation.

I came across a few clients who faced this situation at one point in time. So it is important that you take security measures to prevent such situations from happening in the first place. You can easily achieve this by making use of user-defined roles.

This is one of the many articles in the multi-part series Mastering MongoDB - One tip a day, created solely for you to master MongoDB by learning ‘one tip a day’. Over the next few articles, I would like to give various tips to tighten the security on MongoDB. In this article, I discuss how a user-defined role can help prevent someone from accidentally dropping a collection.

Mastering user-defined role

What is a role

MongoDB employs Role-Based Access Control (RBAC) to govern access to a MongoDB system. A role grants users access to MongoDB resources. Outside of role assignments, the user has no access to the system.

Here are a few concepts that I want you to be aware of:

  • A user is granted one or more roles.
  • A role grants privileges to perform sets of actions on a resource.
  • A privilege consists of a resource and the actions permitted on it.
  • A resource is a database, collection or set of collections.
  • An action specifies the operation allowed on the resource.

Why user-defined role

MongoDB provides a number of built-in roles that administrators can use to control access to a MongoDB system. Every database includes the following roles:

Database User Roles

  • read
  • readWrite

Database Administration Roles

  • dbAdmin
  • userAdmin
  • dbOwner

However, if these roles cannot describe the desired set of privileges, you can create a new user-defined role by using the db.createRole() method. While creating a role, you can specify the set of privileges you want to grant access.

readWrite role has dropCollection action

Database administrators typically make use of the built-in ‘read’ and ‘readWrite’ roles to restrict access to data. The ‘getRole’ command below shows the various actions a user with the ‘readWrite’ role can execute.

In the context of the current article, the riskiest action among them is ‘dropCollection’. If there is truly a need for a (human) user to have read & write permission, it is recommended to create a user-defined role with all the actions of the ‘readWrite’ role except ‘dropCollection’. By assigning this user-defined role to such users, administrators prevent someone from accidentally dropping a collection.

Hands-On lab exercises

This lab exercise helps you create a user-defined role and illustrates how readWriteMinusDropRole can prevent someone from accidentally dropping a collection, compared to a user with the readWrite role.

Setup environment

First, you need an environment to play around in. I recommend using mlaunch, a utility from mtools, to set up a test environment on your local machine. If you already have an environment with authentication turned on, you may skip this step.

Create app_user with readWrite role

Log in to the test environment using the above credentials and create app_user with readWrite role on the social database.

A user with readWrite role can drop collection

Log in to the test environment using the app_user credentials and create a sample person document on the social database. Since app_user has the readWrite role, which grants access to the dropCollection action, the command db.person.drop() succeeds and the collection is dropped.

Create a user-defined role

Log in to the test environment using the user credentials and create the following on the social database.

  • a user-defined role, readWriteMinusDropRole
  • a user human_user with readWriteMinusDropRole role

Notice that there is no dropCollection action from the set of actions being granted to readWriteMinusDropRole.
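A sketch of such a role definition is below; the action list is an approximation of readWrite’s documented actions, so verify the exact list on your server with db.getRole("readWrite", { showPrivileges: true }).

```javascript
// Sketch: a user-defined role mirroring readWrite, minus dropCollection.
const readWriteMinusDropRole = {
  createRole: "readWriteMinusDropRole",
  privileges: [
    {
      resource: { db: "social", collection: "" }, // all collections in "social"
      actions: [
        "find", "insert", "update", "remove",
        "createCollection", "createIndex", "dropIndex",
        "listCollections", "listIndexes", "collStats",
        // "dropCollection" is deliberately omitted
      ],
    },
  ],
  roles: [],
};

// In the mongo shell, as a user administrator on the "social" database:
//   db.createRole(readWriteMinusDropRole)
//   db.createUser({ user: "human_user", pwd: "<password>",
//                   roles: [{ role: "readWriteMinusDropRole", db: "social" }] })

console.log(readWriteMinusDropRole.privileges[0].actions.includes("dropCollection")); // false
```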

User with readWriteMinusDropRole role cannot drop collection


While MongoDB provides various built-in roles covering the different levels of access commonly needed in a database system, you need user-defined roles to grant fine-grained actions on MongoDB resources.

Don’t wait till you regret not doing it earlier. Tighten your security measures using user-defined roles and prevent someone from accidentally dropping a collection.

Here are a few measures taken by my clients to tighten the security.

  • No access to the production environment for developers (more drastic)
  • If access is required, give ‘read’ role to developers (much needed)
  • Create a user-defined role with all ‘readWrite’ actions but ‘dropCollection’
  • If ‘read & write’ permissions are required for any users, assign the above user-defined role (highly recommended)
  • Create a separate app_user with ‘readWrite’ permissions for your application to interact with MongoDB

With MongoDB v3.6, you can further tighten security by defining the range of IP addresses a user is allowed to authenticate from, using authenticationRestrictions. But that’s a topic for another day. Hopefully you learned something new today as you scale the path to “Mastering MongoDB - One tip a day”.

Mastering MongoDB - Ahhh …! Someone just dropped a collection was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

The Value of a Brand for Tech Companies:

2018-05-16 19:55:14

VC: “Where’s the defensibility?”
Founder: “We have a great brand.”
Awkward silence…

I’m sure many a VC or founder can relate to this exchange. VCs want to invest in companies that have network effects, strong lock-in, or a competitive edge in technology or distribution. I have a sneaking suspicion, however, that today many VCs undervalue brands and place too high a premium on technology. I first began thinking about this because I noticed that more and more founders were alluding to “brand” during pitches. And given that founders are the best proxy for where opportunities lie, I thought it was worth paying attention to. USV’s thesis 3.0, which they recently published, made me more confident in this supposition.

I think there are three main reasons why brands are becoming an increasingly important component of high-growth technology businesses:

1.) The role of brands in society is evolving. More and more, consumers look to brands that align with their values and this emotional resonance breeds loyalty. Furthermore, in a world of massive informational overload — the average consumer is exposed to 3000 brands a day — trusted brands act as an effective heuristic for parsing through noise.

2.) The strength of technological moats has weakened due to the accelerated speed of innovation and the availability of global capital. Global VCs can fund proven business models at the drop of a hat and the major tech companies are constantly infringing on each other’s turf. Facebook copied Snapchat with ease, Amazon looms large over everyone, and Chinese and American bike-sharing companies duke it out with massive war chests. As Arjun Sethi wrote in this piece, the moat you have today simply grants you runway to create the next killer product. As these moats shrink, the distribution of value in the stack of a technology company shifts:

Indeed, as I sat down to write this over the weekend, the merits of competitive moats were being debated by none other than Musk and Buffett. Buffett likes to invest in recognisable brands, such as Coca-Cola, because of the moat they provide, while Musk opined that moats are “lame” and companies should stay competitive through innovation. In this case, I think Musk is underestimating the moat that his personal brand has contributed to his companies.

3.) Distribution channels have opened up. The greatest challenge for a startup today, whether it’s a mattress company or a musical artist, is to foster awareness. Anyone can market a product through Instagram and distribute through Amazon but awareness and a loyal fan base, aka a brand, is invaluable. The theory of a long tail of winners has not manifested. We live in a blockbuster world: Kylie Jenner did $1BN in sales for her makeup line last year; Beyonce achieves goddess status; Tesla is overvalued because of Elon’s fan club; and WeWork justifies its sky high valuation with its “spirituality.”

Traditionally, VCs have not placed much value on “brand” because its value is intangible and difficult to quantify. The same goes for many public equities investors. But it’s clear from recent success stories like Revolut and Robinhood, which achieved billion dollar valuations in record time, that their nascent brands played an important role in their rapid growth. Indeed, Apple, perhaps the most recognised technology brand in the world, also happens to be the most valuable company in the world. Moving forward, investors and entrepreneurs will need to develop new mental models to better understand the role of brands in technology companies and early-stage startups. What exactly might those mental models be? Well, that’s for another post, since I’m still figuring that part out!

If you enjoyed the post, please clap so others discover it. Cheers.

The Value of a Brand for Tech Companies: was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Deploying Angular Universal v6+ with Firebase

2018-05-16 18:51:01

Disclaimer: This blog post will be a focused step-by-step tutorial of how to deploy an Angular Universal App using Firebase Hosting. For any explanations about Angular Universal and Server Side Rendering, Angular has a great documentation on their website.

You can also find the source code on Github.


Prerequisites

  • node.js (I am using v8.11.1 for this tutorial)
  • Angular 6+ (I have written a similar article for deploying Angular < v6)

Part I: Set Up Angular App 🛠

1. Install global dependencies

We are going to use @angular/cli and firebase-tools in command line to build and deploy your app.
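A sketch of the install step (standard npm commands for these two packages):

```sh
npm install -g @angular/cli firebase-tools
```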

2. Create a new Angular project

Using @angular/cli , we are going to create a new angular app. In this case, I will name it angular-universal-firebase .
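Assuming the standard CLI workflow, creating the project looks like:

```sh
ng new angular-universal-firebase
```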

3. Install @angular/platform-server

To build and render your universal app, we need to install @angular/platform-server .
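The install step, using npm (the version should match your other @angular packages):

```sh
npm install @angular/platform-server --save
```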

4. Add Server Side Rendering Config

In @angular/cli@v6.0.0+, .angular-cli.json has changed to angular.json. This defines how our project is structured and the build configurations for the project. We want to add a server configuration for the project under the projects.PROJECT_NAME.architect path.

Note that we’ve added server that defines the builder and options config for the server side version of our app.
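A sketch of what that server target might look like in angular.json; the file names follow the ones used later in this tutorial, and tsconfig.server.json is an assumed name since the exact filename is cut off in step 6:

```json
"server": {
  "builder": "@angular-devkit/build-angular:server",
  "options": {
    "outputPath": "functions/dist/server",
    "main": "src/main-ssr.ts",
    "tsConfig": "src/tsconfig.server.json"
  }
}
```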

5. Modify project output to Firebase folder

For simplicity, we will build the browser version of our app in the same directory where we are building our server version to be server-side rendered in Firebase. To do this, edit the build outputPath in angular.json to functions/dist/browser.

6. Create necessary files for app server version

  • src/app/app.server.module.ts

Create a new module for the app’s server version.
  • src/main-ssr.ts

Create an entry point for the server module. This is the main file we referenced in the server configuration in angular.json .
  • src/

Create the tsconfig for the server version. It is similar to the browser version except for angularCompilerOptions.entryModule, which references the entry module for the server version that we just created. This file is also referenced in angular.json as tsConfig.

7. Include server transition in app’s browser module

Since we are sending the server version of the app to the browser before the browser version, we need to add .withServerTransition({ appId }) when adding BrowserModule to imports.
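A sketch of the browser module change; the appId value is illustrative, it just has to match the one the server module uses:

```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';

@NgModule({
  imports: [
    // appId must match the id used by the server-side version of the app
    BrowserModule.withServerTransition({ appId: 'angular-universal-firebase' }),
  ],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}
```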

Now we are ready to build the server and browser versions of our app!

8. Build browser and server versions of the app

Using @angular/cli, we will build the two versions of the app.

  • ng build --prod: This will build the browser version of the app with prod configurations.
  • ng run PROJECT_NAME:server: This will build the server version of the app. It will generate an ngFactory file that we can use to render our app using node.

When both builds are done, you should now have a functions folder in your root directory with browser and server folders in it. Awesome!!! 🎉

Part II: Deploying with Firebase 🚀

[1] Before continuing, you should have created a Firebase project here. I named mine angular-universal-firebase for this case.

1. Log in to `firebase` in the command line

Log in to firebase in the command line with the same google account you used to create your firebase project in [1].
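The login command:

```sh
firebase login
```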

2. Initialize Firebase in the `angular` project

Initialize firebase configurations through the command line:

  • Select Functions and Hosting for the features to set up
  • Select Javascript as the Cloud Functions language, for simplicity
  • Select the firebase project you created in [1] (in my case, it’s angular-universal-firebase)
  • Accept all other defaults in this stage; we will configure the rest in later steps
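The initialization command itself, run from the project root:

```sh
firebase init
```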

3. Add package dependencies to `functions`

Since we are using a node server through firebase-functions, we need to include the angular dependencies in functions/package.json to render the server version of the app.

Aside: Right now, I don’t know any way to mitigate this duplication of dependency declaration since as far as I know, you can’t access files outside the functions directory in any firebase-functions javascript files. But if you know a way, please let me know!

4. Install packages in `functions` directory

Install the dependencies!
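Assuming the default layout, from the project root:

```sh
cd functions && npm install
```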

5. Create Firebase function to serve the app

We are going to use the functions.https.onRequest Firebase function type to send the response from our express server. There is a lot going on in this file, but the most notable parts are:

  • Importing AppServerModuleNgFactory which was generated in Part I: Step 8 — server version.
  • Creating an index variable which is getting the index.html file we generated from Part I: Step 8 — browser version.
  • Using renderModuleFactory to generate an html file that we send as a response with url and document parameters.
  • url parameter determines which route of the app is going to be rendered. Specifying this allows renderModuleFactory to build the html of that route.
  • document is the full document HTML of that page that should be used to render. In this case, it will be the browser version of index.html of the app.
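Putting those pieces together, a sketch of functions/index.js might look like the following; the paths and the ssr function name are assumptions based on the steps above, not the author’s exact code:

```javascript
// Sketch of functions/index.js; paths and names are assumptions based on the
// steps above (server build in dist/server, browser build in dist/browser).
require('zone.js/dist/zone-node');

const functions = require('firebase-functions');
const express = require('express');
const fs = require('fs');
const { renderModuleFactory } = require('@angular/platform-server');

// Generated by `ng run PROJECT_NAME:server` (Part I, step 8)
const { AppServerModuleNgFactory } = require('./dist/server/main');

// The browser build's index.html acts as the document template
const index = fs.readFileSync(`${__dirname}/dist/browser/index.html`, 'utf8');

const app = express();
app.get('**', (req, res) => {
  renderModuleFactory(AppServerModuleNgFactory, {
    url: req.path,    // which route of the app to render
    document: index,  // HTML template to render into
  }).then((html) => res.send(html));
});

exports.ssr = functions.https.onRequest(app);
```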

7. Configure Firebase hosting

Now that we have built the function to render pages, we need to change the Firebase hosting configuration to use this function. Change hosting.rewrites in firebase.json.
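A sketch of the resulting firebase.json hosting section, assuming the Cloud Function is exported as ssr:

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "**", "function": "ssr" }
    ]
  }
}
```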

8. Rename `public/index.html` in the root directory

This is so Firebase won’t serve the html file but rather run the ssr function. You can rename it to anything other than index. We can’t simply delete this file, since Firebase won’t deploy with an empty public directory. For simplicity, I will rename public/index.html to public/index2.html.

9. Deploy to Firebase 🚀 🔥

If all things went well, you should be able to deploy your app to Firebase:
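The deploy itself is a single command:

```sh
firebase deploy
```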

That’s it! 👍

You can check out the source code on Github.

I hope this tutorial was helpful in some way! If you have any feedback or questions, add them on the Github issues to ensure everyone looking at the code would benefit. 😄

Happy coding! 😃

Deploying Angular Universal v6+ with Firebase 🚀 🔥 was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Communication between components

2018-05-16 18:48:17

Photo by Colwyn on Flickr

Communication is important, especially neighborhood communication 😎 Let’s collect communication approaches between components.

I remember a time when I jumped to Angular and, while developing, had to find the best way to communicate between components. In AngularJS we may dispatch events ($emit, $broadcast); in React we may use Redux; etc. Each framework has its own approaches, but we may have general solutions that work no matter the framework, even for web components. So, let’s collect them.

We are going to talk about neighborhood communication 😎

Custom Events

CustomEvent is a good approach for dispatching events and listening to them. You would usually listen for events on a target Element, Document, or Window, but the target may be any object that supports events (such as XMLHttpRequest). It does not work in IE, but for that we have a polyfill solution.

So, a Custom Event service is going to look like this:

Communication phase:


Publish/Subscribe

The Publish/Subscribe pattern encourages us to think hard about the relationships between different parts of our application.

The Publish/Subscribe pattern saves a TOPIC name and a reference to a callback. When you publish the TOPIC, it calls the callback.
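A minimal sketch of the pattern:

```javascript
// Minimal Publish/Subscribe implementation: a topic name maps to a list of
// callbacks; publishing a topic invokes every callback registered for it.
class PubSub {
  constructor() { this.topics = new Map(); }
  subscribe(topic, callback) {
    if (!this.topics.has(topic)) this.topics.set(topic, []);
    this.topics.get(topic).push(callback);
    // return an unsubscribe function
    return () => {
      const subs = this.topics.get(topic);
      const i = subs.indexOf(callback);
      if (i !== -1) subs.splice(i, 1);
    };
  }
  publish(topic, data) {
    (this.topics.get(topic) || []).forEach((cb) => cb(data));
  }
}

// Usage:
const pubsub = new PubSub();
const off = pubsub.subscribe('greet', (name) => console.log(`hello ${name}`));
pubsub.publish('greet', 'world'); // prints "hello world"
off();                            // callback no longer invoked
```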

👏 Thank you for reading. Suggestions, comments, thoughts are welcome 👍

If you like this, clap, follow me on medium, twitter, share with your friends 😎

Communication between components was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

16/05/2018: Biggest Stories in the Cryptosphere

2018-05-16 18:38:08

By BlockEx

1. Consensus Panellists Driven Out of New York by Bitlicense

Jesse Powell, the CEO of Kraken, and Erik Voorhees, the CEO of Shapeshift, discussed how they had been driven out of New York by the regulatory overreach of the Bitlicense. Voorhees complained: “Here we are two miles from the Statue of Liberty and you cannot sell CryptoKitties in the state without that license. That’s the absurdity of what’s happened here”. Powell thinks the US “has really failed” by letting this be regulated at a state rather than national level. He pointed to the clarity that the national Virtual Currency Act had brought the industry in Japan, and the crypto business boom this has created. However, both pointed out that crypto businesses tend to be highly mobile, not linked to any one location. Users can also use workarounds like VPNs to access services that may not legally be available to them.

2. Blockchain Standards Unveiled by Enterprise Ethereum Alliance

The 500-strong group released a common technical specification on Wednesday. This will help to connect development efforts across the enterprise-focused, Ethereum-based initiative. The group connects companies with Ethereum experts. Their aim is to “define enterprise-grade software capable of handling the most complex, highly demanding applications at the speed of business.” The group is important because it works with the Ethereum community to help institutions take part in the revolutionary possibilities of Ethereum.

3. Bitcoin Risks Falling Below $8K

Bitcoin had a minor price rally from Saturday, but that has now unravelled, and it risks falling below the $8,000 mark. Bitcoin has now dipped below the 50-day moving average. Traditionally, the Consensus conference in New York, one of the biggest in the industry, has seen cryptocurrencies rally, but that does not seem to be the case so far this week.

This news roundup was brought to you by BlockEx.
To receive our daily news roundup in your mailbox, sign up here:

16/05/2018: Biggest Stories in the Cryptosphere was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

What Are Hashed Timelock Contracts (HTLCs)? Application In Lightning Network & Payment Channels

2018-05-16 18:26:01

Video Version

Definition From Bitcoin Wiki :

A Hashed TimeLock Contract or HTLC is a class of payments that uses hashlocks and timelocks to require that the receiver of a payment either acknowledge receiving the payment prior to a deadline by generating cryptographic proof of payment or forfeit the ability to claim the payment, returning it to the payer.[1]
The cryptographic proof of payment the receiver generates can then be used to trigger other actions in other payments, making HTLCs a powerful technique for producing conditional payments in Bitcoin.

HTLCs In Payment Channels :

HTLCs allow payments to be securely routed across multiple payment channels, which is super important because it is not optimal for a person to open a payment channel with everyone he/she is transacting with.

HTLCs are integral to the design of more advanced payment channels such as those used by the Lightning Network.

For example:
If Alice has a channel open to Bob and Bob has a channel open to Charlie, Alice can use a HTLC to pay Charlie through Bob without any risk of Bob stealing the payment in transit.

Let’s understand step-by-step how the transaction would unfold:

1. Alice wants to buy something from Charlie for 1,000 satoshis.

2. Alice opens a payment channel to Bob, and Bob opens a payment channel to Charlie.

3. Charlie generates a random number (x) and generates its SHA256 hash, h(x).

4. Charlie gives the generated hash to Alice.

5. Alice uses her payment channel to Bob to pay him 1,000 satoshis, but she adds the hash Charlie gave her to the payment along with an extra condition: in order for Bob to claim the payment, he has to provide the data which was used to produce that hash.

6. Bob uses his payment channel to Charlie to pay Charlie 1,000 satoshis, and Bob adds a copy of the same condition that Alice put on the payment she gave Bob.

7. B → C (the transaction in step 6 goes through): Charlie has the original data that was used to produce the hash (called a pre-image), so Charlie can use it to finalise his payment and fully receive the payment from Bob. By doing so, Charlie necessarily makes the pre-image available to Bob.

8. A → B (the transaction in step 5 goes through): Bob uses the pre-image to finalise his payment from Alice.

The example above covers the special case of a single intermediary (Bob) between payer and payee.
This method can be extended to hop through more than one intermediary, which is essential for public, mass usage.
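The hashlock mechanism described above can be sketched as a toy Python simulation. The `HTLCPayment` class and its method names are illustrative assumptions, not real Bitcoin script; only the hashlock/pre-image logic mirrors the actual mechanism (timelocks are omitted for brevity):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class HTLCPayment:
    """Toy payment that can only be claimed with the correct pre-image."""
    def __init__(self, amount: int, hashlock: str):
        self.amount = amount      # satoshis
        self.hashlock = hashlock  # h(x), the SHA256 hash of the secret
        self.claimed = False

    def claim(self, preimage: bytes) -> bool:
        # The payment finalises only if the pre-image hashes to the hashlock.
        if sha256(preimage) == self.hashlock:
            self.claimed = True
        return self.claimed

# Charlie generates a random number x and its hash h(x), then gives h(x) to Alice.
x = b"charlie-secret"
h = sha256(x)

# Alice pays Bob, and Bob pays Charlie, both payments locked on the same hash.
alice_to_bob = HTLCPayment(1000, h)
bob_to_charlie = HTLCPayment(1000, h)

# Charlie claims from Bob, necessarily revealing x to Bob in the process,
# which in turn lets Bob claim from Alice with the same pre-image.
assert bob_to_charlie.claim(x)
assert alice_to_bob.claim(x)

# A wrong pre-image cannot claim anything, so Bob cannot steal in transit.
assert not HTLCPayment(1000, h).claim(b"not-the-secret")
```

Because both payments are locked on the same hash, the act of claiming one necessarily unlocks the other, which is what makes the route trustless.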

Side-Note:

The ideal cryptographic hash function has five main properties:

  • it is deterministic so the same message always results in the same hash
  • it is quick to compute the hash value for any given message
  • it is infeasible to generate a message from its hash value except by trying all possible messages
  • a small change to a message should change the hash value so extensively that the new hash value appears uncorrelated with the old hash value
  • it is infeasible to find two different messages with the same hash value
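The determinism and avalanche properties are easy to see with Python's standard hashlib (a quick illustration, not part of the original article):

```python
import hashlib

def h(msg: str) -> str:
    return hashlib.sha256(msg.encode()).hexdigest()

# Deterministic: the same message always produces the same hash.
assert h("hello") == h("hello")

# Avalanche effect: a one-letter change yields a seemingly unrelated digest.
a, b = h("hello"), h("hellp")
differing = sum(c1 != c2 for c1, c2 in zip(a, b))
print(f"{differing} of {len(a)} hex characters differ")
```

For two unrelated 256-bit digests, roughly 15 out of every 16 hex characters are expected to differ, which is why the new hash value appears uncorrelated with the old one.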

What Are Hashed Timelock Contracts (HTLCs)? Application In Lightning Network & Payment Channels was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Good Process vs. Bad Process

2018-05-16 17:41:17

Busy with the newborn, but still pondering, tweeting, writing some thinky posts (influenced by the lack of sleep, no doubt), and occasionally recording short, low-fidelity podcasts.

Last night’s sleepy list… when is process “good”?

Good process (is) | Bad process (is)

Encourages mindfulness | Encourages mindlessness

Flexible to local concerns | Inflexible to local concerns

Adaptable, frequently challenged/improved | Set in stone. “Just because…”

Mostly “pulled” because it is valuable | Mostly “pushed” on to participants

Core principles understood | Automatic/forced adherence

Encourages conversations/collaboration | Reduces quality/quantity of conversations

Co-created/designed with “users” | Designed in vacuum and imposed

Value to all participants | One-sided value

Increases confidence in outcomes | Detached from outcomes

Distilled to core “job” (lightweight) | Burdened by many jobs/concerns

Achieves desired consistency with minimal impact on resiliency. Improves global outcomes. | Achieves consistency to the detriment of global outcomes / long-term resilience

Delivers value to end-customers | Disconnected from customer value

Guide/tool/navigate/remind | Control/direct

Enhances trust/safety | Trust proxy, safety proxy

Good Process vs. Bad Process was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Crypto Moats

2018-05-16 17:23:06

We are in the early stages of designing and deploying cryptocurrencies, and if you believe the Fat Protocol Hypothesis, there are billions of dollars at stake. Naturally, cryptocurrencies and their adherents will seek to defend their positions from challengers. Fundamentally this is a very different game in the crypto space than it is in traditional business because of the open source nature of cryptocurrencies and the ability to fork blockchains. Together, these things mean the cost to compete with cryptocurrencies is extremely low.

In this environment how do cryptocurrencies seek to defend themselves from competition? What are the defensible competitive advantages, or moats, that cryptocurrencies have and can cultivate? These are questions that I seek to answer in this article.

Defensible competitive advantages

Here is a list of moats that give a cryptocurrency a defensible competitive advantage. Please comment on any blind spots I’m missing here.

Superior brand

Cryptocurrencies have reputations just like firms do. The actions they take, the people associated with them, and the language used to describe them are important in shaping users’ preferences. Describing Bitcoin as digital gold has salience to the average person, which has made the narrative stick. In turn, this narrative has driven millions of dollars into Bitcoin. Likewise, Litecoin being the “digital silver” to Bitcoin’s digital gold significantly contributes to its success.

Moreover, teams of developers and de facto leaders of projects are important. Ethereum is inexorably tied to Vitalik Buterin, Zcash is tied to Zooko Wilcox-O’Hearn, Bitcoin Cash to Roger Ver, and Bitconnect is tied to this guy.

These connections color investors’ choices. An investment in Ethereum right now is, in part, a bet on Vitalik and co shepherding Ethereum through its scaling pains. Likewise, how you feel about a particular person might dictate whether you buy the original currency or a forked version. The future of the crypto space is more political than we like to admit.

Lastly, cryptocurrencies are constantly under intense competition and there is pressure to either evolve or die. The ways that cryptocurrencies respond to this competition will give them a reputation. When a new cryptocurrency encroaches on an old one’s territory, what did it do? Was it accommodating and did it extend an open hand for collaboration? Or did it take aggressive action and invite provocation? The old cryptocurrency could fork the desirable parts of the new one, strategically dump its assets, or even buy out any newcomers. In this way, a cryptocurrency’s reputation can act as a moat that keeps new entrants away.

Superior developers

Cryptocurrencies will die or thrive as a result of their developers. Whether they are a cohesive team brought in by an initial coin offering, contributors acting under the purview of a foundation, or simply anonymous volunteers, these are the people who will drive the future direction and upgrading of their respective cryptocurrencies. In a nascent and fast-moving space, developers who are able to separate signal from noise and execute are highly desirable. Moreover, much of the technology in the space will be open sourced. Knowing how to navigate the complex tradeoffs inherent in many of the new technologies, having a clear vision, and being able to articulate and deliver on that vision are just as important as the technologies themselves. Because of this, development talent, along with critical thinking and execution ability, will command a premium.

Partially/fully closed source code

Cryptocurrencies might withhold some or all of their code in the future to keep competitors from taking it. Spencer Noon touches on this in his post The Persistent Forker: developers could put their code in a “black box” that can prove the code did not change over time. That way competitors would not be able to fork a working copy of the cryptocurrency.

Rightfully, Spencer points out this is anathema to a core tenet of cryptocurrency: no trusted third parties. I agree, and I think that in the long term this isn’t a tenable position, but people may be willing to accept a “black box” in the short term if a team credibly commits to open sourcing at a later date. The reason is that it gives teams a chance to entrench their cryptocurrencies by launching a product, driving adoption, and establishing network effects. In turn, these should have a positive effect on their expected returns.

An important caveat here, as Spencer points out, is that this wouldn’t be acceptable for stores of value. Part of what makes Bitcoin Bitcoin is that you can have absolute certainty in its properties. A hidden section of the code could introduce centralization, add inflation, etc. A number of ICOs are inadvertently doing this right now. To some degree this reflects how new these platforms are, but I think there will be a reckoning when some teams launch a product and try to keep some/all of their code closed source.

Life span

Nassim Taleb introduces the idea of the Lindy Effect in his book Antifragile. The Lindy Effect states the future life expectancy of non-perishable assets is proportional to their current age. In other words, the longer something has been around the longer we can expect it to stay around.

For cryptocurrencies this is important for a few reasons. The entire industry is still nascent with dozens of new assets emerging daily, all of them intensely fighting for users, developers, and the attention of the community. To survive a meaningful amount of time in this competitive environment is itself valuable and a defensible competitive advantage.

Further to this, the longer an asset has been around, the more it has been battle tested for vulnerabilities. Cryptocurrencies are the largest bug bounties ever created: an enterprising hacker could in theory make off with billions of dollars by successfully exploiting a vulnerability. So far, the primordial Bitcoin has survived for nearly a decade. In the long view of history that isn’t very much time, but it is worth something when compared to its fledgling month-old competitors, especially when you take into account the amount of money that has been at stake for Bitcoin.

What’s more, a cryptocurrency being around for longer allows for norms to be soundly established. Norms are the unwritten rules of society which shape the behavior and expectations of agents within a system. They are often nebulous, hard to define, and even harder to establish. That is why it can be valuable when a cryptocurrency has a track record and clear norms can be identified.

As an example, after a decade the rather unintuitive number of 21 million coins is hard coded into the culture of Bitcoin. Bitcoin’s community is passionate, even religious at times, and any proposal to change the 21 million limit would be vehemently defeated. There is something inherently valuable that comes with that certainty.

Something that is unfolding in real time is the response to the failure of Parity’s smart contract. In the case of large-scale failures, like the DAO hack or the lockup of nearly half a billion dollars in Parity’s case, it is tempting to execute a bailout and reverse the events that transpired. But each time this is done, a norm is increasingly solidified: if enough money is at stake, immutability can be swept away to recover that money. Depending on how important you believe immutability is, that can be value adding or value destroying.

A last example and a cautionary message, there is a very real chance that some technologies might break in the future. Zcash uses a technology called zk-SNARKS, which are new and relatively unproven. One participant in the Zcash ceremony highlighted this in the quote below:

The cryptography behind Zcash is both highly experimental, and relatively weak. Fact is, if zk-SNARKS turned out to be totally broken, unlike more mainstream crypto, it just wouldn’t be all that surprising.

The important thing to take away here isn’t that zk-SNARKS are useless and likely to break, it’s that we have less certainty about them because they are so new.

Network effects

eBay is a useful service precisely because so many buyers and sellers gather there. Likewise, cryptocurrencies are useful insofar as they aggregate many different parties on one platform. The value of this aggregation is proportional to how many parties it brings to the table. This relationship is called Metcalfe’s law, which in its simplest form states that the value of a network is proportional to the square of the number of nodes.
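In its simplest form the relationship is just value proportional to n squared, which makes the compounding effect easy to see (a toy calculation in arbitrary units, not a valuation method):

```python
# Metcalfe's law in its simplest form: network value grows as n^2,
# so doubling the number of nodes quadruples the value.
def metcalfe_value(nodes: int) -> int:
    return nodes ** 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} nodes -> value {metcalfe_value(n)}")

# Doubling participation quadruples value, not merely doubles it.
assert metcalfe_value(200) == 4 * metcalfe_value(100)
```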

Indeed, Metcalfe’s law is often cited as a way of valuing the price of Bitcoin, Ethereum, etc. Out of all moats this is the most important one. Network effects take a long time to establish and are very difficult to erode once established. The creator of Metcalfe’s law, Robert Metcalfe, understood this and when he founded his telecommunications company 3COM he persuaded DEC, Intel, and Xerox to adopt Ethernet as a standard protocol. As Ethernet captured more and more market share competing protocols withered away. A range of Ethernet compatible products emerged which compounded the value of Ethernet and decreased the value of its competitors. Ethernet’s network became so entrenched that it is still nearly ubiquitous today.

The same process could happen with cryptocurrencies. A dreadfully simple statement: a medium of exchange (MoE) is valuable insofar as it can be exchanged for things. Because of this, the first broadly used stablecoin won’t be adopted because it has the least volatility or is cleverly designed (though some degree of each is a prerequisite). It will be because of widespread merchant adoption.

Another example of network effects at play is 0x, a protocol for decentralized exchange of ERC20 tokens in a permissionless and open way. By using the 0x protocol you can seamlessly exchange tokens with other dApps, exchanges, etc., that are using the 0x protocol. There are clear network effects at play here: fragmented patches of liquidity are connected together to create one pool, and the shared benefit that each party gains from using the protocol grows as more adopt it. It is still early days for decentralized exchanges, but 0x already has an impressive list of adopters, including 16 dApps and 14 relayers.

What remains to be seen is how sticky developers are to particular networks. Ethereum has a huge head start but there are aggressive actions being taken by other cryptocurrencies to steal from its developer base. If they can successfully erode Ethereum’s position it would be a huge deal and decrease Ethereum’s value significantly. Also on the horizon are the effects of interoperability, a topic too large to broach here, but here are two articles I recommend.

Tony Sheng’s Doubts about interoperable smart contracts

Kyle Samani’s Smart Contract Network Fallacy

Good governance

What good governance entails is so elusive at this stage that I almost didn’t include this. Fred Ehrsam opens his article Blockchain Governance: Programming Our Future with the following quote:

As with organisms, the most successful blockchains will be those that can best adapt to their environments. Assuming these systems need to evolve to survive, initial design is important, but over a long enough timeline, the mechanisms for change are most important.

It is important that blockchains adapt as the world changes, innovations get diffused, and consumers change their preferences. Even stores of value, like Bitcoin, occasionally need to change. The mechanisms for bringing about that change are important. What’s difficult is that there isn’t one set of mechanisms that should govern all blockchains. Even if we drill down to a specific use case, like a prediction market or stablecoin for example, there isn’t a single best mechanism that fits any particular use case either. Instead, we need to think critically about each use case, what its value proposition is, and what the appropriate mechanisms to match that value proposition should be.

In the infancy stage of developing their blockchains, it is appropriate for a centralized team to control a project entirely. After these teams roll out mainnets and as their blockchain networks grow, it will be a serious challenge for many to establish robust governance mechanisms. I fully expect that many won’t want to relax the iron grip they have on their networks. More than that, robust governance also means fostering a diverse community of stakeholders and figuring out the best ways to establish consensus among them. A lot of this has to come organically, which is precisely why it will be difficult for teams to facilitate this process.

Lastly, I think it is worth touching on the rents that are extracted from a platform as this is a form of governance. If rents, in the form of fees or excessively large token allocations, are too high then a fork is likely. I say too high because I think that there is an appropriate level to justify developers staying onboard. An example of this in action is Zcash (ZEC) and ZClassic (ZCL). ZEC has something called the “founder’s reward,” whereby 10% of all ZEC minted is gradually distributed to the founders, investors, employees, and advisors of the Zcash Company. These folks are largely responsible for driving the development of Zcash, yet calls quickly came to do away with this “genius tax.”

With this, Zclassic was born; it was a fork of Zcash identical in all ways except two: it lacked the founder’s reward and the slow start that Zcash had. Put another way, a competitor created nearly the same currency but removed a rent-seeking mechanism. Despite this, Zcash has always dominated Zclassic in terms of market cap and almost always in terms of ROI as well. The market seems to think that the founder’s reward is justified, for now at least.

Rents need to be priced such that the marginal benefit of continuing development by the rent-seekers is greater than the cost of the rents. High rents may be sustainable for a time period in order to incentivize developers, but after a certain point their cost will exceed their marginal benefit. Moreover, competition in cryptocurrencies doesn’t have to be inherently different from competition in other industries. High profits (rents) in an industry (cryptocurrency) invite competition (forks/new protocols) which in turn lowers the average margin (rent).

If you enjoyed my article then you can get it in your inbox here. You can also follow me on here on Medium or on Twitter. I appreciate feedback!

Crypto Moats was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Earn Crypto Interest with Celsius Network

2018-05-16 17:22:52

CryptoDisrupted Episode 19: An interview with Alex Mashinsky, the CEO and founder of Celsius Network

In this episode I interview Alex Mashinsky, founder of Celsius Network, venture investor, and serial entrepreneur. In the interview we discuss Alex’s extensive entrepreneurial background, the 2008 market crash, how the banking system works, and why he’s working on Celsius Network.

Also available on iTunes.

For more episodes of Crypto Disrupted subscribe on YouTube, or subscribe and listen on iTunes.

Earn Crypto Interest with Celsius Network was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Boosting Algorithms: AdaBoost, Gradient Boosting and XGBoost

2018-05-16 17:17:51

Neural networks and genetic algorithms are our naive approach to imitating nature. They work well for a class of problems, but they face various hurdles such as overfitting, local minima, vanishing gradients, and more. There is another set of algorithms that does not get much recognition (in my opinion) compared to the others: boosting algorithms.

What is Boosting?

Boosting is a method of converting a set of weak learners into a strong learner. Suppose we have a binary classification task. A weak learner has an error rate slightly less than 0.5, i.e., it is only slightly better than deciding from a coin toss. A strong learner has an error rate close to 0. To convert weak learners into a strong learner, we take a family of weak learners, combine them, and vote. This turns the family of weak learners into a strong learner.

The idea here is that the family of weak learners should have a minimum correlation between them.
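The voting idea can be checked with a quick simulation: three independent learners that are each right 70% of the time, combined by majority vote, are right roughly 78% of the time (0.7^3 + 3 * 0.7^2 * 0.3 ≈ 0.784). The 70% accuracy figure is an illustrative assumption:

```python
import random

random.seed(0)

def weak_learner(truth: int, accuracy: float = 0.7) -> int:
    # Predicts the true label with probability `accuracy`, else the opposite.
    return truth if random.random() < accuracy else 1 - truth

trials = 10_000
single_correct = vote_correct = 0
for _ in range(trials):
    truth = random.randint(0, 1)
    preds = [weak_learner(truth) for _ in range(3)]
    majority = 1 if sum(preds) >= 2 else 0   # democratic vote
    single_correct += preds[0] == truth
    vote_correct += majority == truth

print(f"single learner accuracy: {single_correct / trials:.3f}")
print(f"majority vote accuracy:  {vote_correct / trials:.3f}")
```

Note that the gain depends on the learners' errors being independent; if all three made the same mistakes, voting would add nothing.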

Here, let A, B, and C be different classifiers. Area A represents where classifier A misclassifies (goes wrong), area B where classifier B misclassifies, and area C where classifier C misclassifies. If there is no correlation between the errors of the classifiers, then by combining them and using democratic voting to classify each object, this family of classifiers will, in this idealised picture, never go wrong. I hope this provides a basic understanding of boosting. Moving on to the types of boosting algorithms.

Types of boosting algorithms:

I would like to explain the different boosting algorithms without any of the math involved, because I feel it would complicate things and defeat the purpose of this article, which is simplicity (hopefully). The different types of boosting algorithms are:

  • AdaBoost
  • Gradient Boosting
  • XGBoost

These three algorithms have gained huge popularity, especially XGBoost, which has been responsible for winning many data science competitions.

AdaBoost(Adaptive Boosting):

The Adaptive Boosting technique was formulated by Yoav Freund and Robert Schapire, who won the Gödel Prize for their work. AdaBoost works on improving the areas where the base learner fails. The base learner is a weak machine learning algorithm to which the boosting method is applied to turn it into a strong learner. Any machine learning algorithm that accepts weights on training data can be used as a base learner. In the example below, decision stumps are used as the base learner.

We take the training data, randomly sample points from it, and apply the decision stump algorithm to classify the points. After classifying the sampled points we fit the decision stump to the complete training data. This process happens iteratively until the complete training data fits without any error or a specified maximum number of estimators is reached.

After sampling from training data and applying the decision stump, the model fits as showcased below.

Decision Stump 1

We can observe that three of the positive samples are misclassified as negative. Therefore, we exaggerate the weights of these misclassified samples so that they have a better chance of being selected when sampled again.

Decision Stump 2

When data is sampled the next time, decision stump 2 is combined with decision stump 1 to fit the training data, so we have a miniature ensemble trying to fit the data perfectly. This miniature ensemble of two decision stumps misclassifies three negative samples as positive. Therefore, we exaggerate the weights of these misclassified samples so that they have a better chance of being selected when sampled again.

Decision Stump 3

The previously misclassified samples are chosen and decision stump 3 is applied to fit the training data. We can find that two positive samples are classified as negative and one negative sample is classified as positive. Then the ensemble of three decision stumps (1, 2, and 3) is used to fit the complete training data; with this ensemble, the model fits the training data perfectly.

Ensemble of 3 Decision Stumps

The drawback of AdaBoost is that it is easily defeated by noisy data: the efficiency of the algorithm is highly affected by outliers, as the algorithm tries to fit every point perfectly. You might be wondering: since the algorithm tries to fit every point, doesn’t it overfit? No, it does not. This answer comes from experimental results; there has been speculation, but no concrete explanation is available.


# AdaBoost Algorithm
from sklearn.ensemble import AdaBoostClassifier
clf = AdaBoostClassifier()
# n_estimators = 50 (default value)
# base_estimator = DecisionTreeClassifier (default value)
clf.fit(X_train, y_train)

Continuing to explain Gradient Boosting and XGBoost will further increase the length of this already pretty long article. Therefore I have decided to write them as another article. Please follow the link below.



Boosting Algorithms: AdaBoost, Gradient Boosting and XGBoost was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Gradient Boosting and XGBoost

2018-05-16 17:17:32

Starting from where we ended, let’s continue discussing the different boosting algorithms. If you have not read the previous article, which explains boosting and AdaBoost, please have a look.


Gradient Boosting:

Moving on, let’s have a look at another boosting algorithm: gradient boosting. Gradient boosting is also a boosting algorithm (duh!), hence it also tries to create a strong learner from an ensemble of weak learners. This algorithm is similar to Adaptive Boosting (AdaBoost) but differs from it in certain aspects. In this method we visualise the boosting problem as an optimisation problem, i.e., we take up a loss function and try to optimise it. This idea was first developed by Leo Breiman.

How is Gradient Boosting interpreted as an optimisation problem?

We take a weak learner (in the previous case it was a decision stump) and at each step we add another weak learner to increase the performance and build a strong learner. Each addition reduces the value of the loss function. We iteratively add each model and compute the loss. The loss represents the error residuals (the difference between the actual value and the predicted value), and using this loss value the predictions are updated to minimise the residuals.

Let us break it down step by step.

In the first iteration, we take a simple model and try to fit the complete data. You can see from the above image that the model’s predictions differ from the ground truth. The error residuals are plotted on the right side of the image. The loss function tries to reduce these error residuals by adding more weak learners. The new weak learners are added to concentrate on the areas where the existing learners perform poorly.

After three iterations, you can observe that model is able to fit the data better. This process is iteratively carried out until the residuals are zero.

After 20 iterations, the model almost fits the data exactly and the residuals drop to zero.
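The residual-fitting loop described above can be sketched by hand. This is a minimal illustration with hand-rolled depth-1 "stumps" as the weak learners and synthetic sine-wave data, both of which are assumptions for the example:

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression 'tree': the single split minimising SSE on r."""
    best_sse, best = np.inf, None
    for s in x:
        left, right = r[x <= s], r[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (s, left.mean(), right.mean())
    return best

def predict_stump(x, stump):
    split, left_mean, right_mean = stump
    return np.where(x <= split, left_mean, right_mean)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)   # noisy ground truth

prediction = np.zeros_like(y)             # start from a trivial model
learning_rate = 0.1
for _ in range(100):
    residuals = y - prediction            # error residuals
    stump = fit_stump(x, residuals)       # weak learner fits the residuals
    prediction += learning_rate * predict_stump(x, stump)

print("final MSE:", np.mean((y - prediction) ** 2))
```

Each iteration fits a stump to the current residuals and nudges the prediction toward the ground truth, so the training error shrinks round after round, exactly the behaviour the plots above show.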

# Gradient Boosting
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier()
# n_estimators = 100 (default)
# loss = deviance (default), as used in logistic regression
clf.fit(X_train, y_train)

XGBoost(Extreme Gradient Boosting):

XGBoost has taken data science competitions by storm. XGBoost seems to be part of the ensemble of classifiers/predictors used to win most data science competitions. Why is this so? Why is XGBoost so powerful?

XGBoost is similar to gradient boosting algorithm but it has a few tricks up its sleeve which makes it stand out from the rest.

Features of XGBoost are:

  • Clever penalisation of trees
  • Proportional shrinking of leaf nodes
  • Newton boosting
  • Extra randomisation parameter

In XGBoost, trees can have a varying number of terminal nodes, and the leaf weights of trees calculated with less evidence are shrunk more heavily. Newton boosting uses the Newton–Raphson method of approximation, which provides a more direct route to the minimum than gradient descent. The extra randomisation parameter can be used to reduce the correlation between the trees; as seen in the previous article, the lower the correlation among classifiers, the better our ensemble of classifiers will turn out. Generally, XGBoost is faster than gradient boosting, but gradient boosting has a wider range of applications.

# XGBoost
from xgboost import XGBClassifier
clf = XGBClassifier()
# n_estimators = 100 (default)
# max_depth = 3 (default)
clf.fit(X_train, y_train)


These tree boosting algorithms have gained huge popularity and are present in the repertoire of almost all Kagglers. I hope this two-part article has given you a basic understanding of the three algorithms.


Gradient Boosting from scratch

Gradient Boosting and XGBoost was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Beginner’s Guide to deploying your Blockchain in IBM Bluemix

2018-05-16 17:13:11


IBM Bluemix is now IBM Cloud. After the rebrand, IBM Cloud provides not only Platform as a Service but also Infrastructure as a Service and Software as a Service. As a flexible SaaS platform, IBM Cloud delivers all the necessary services for running a blockchain in the cloud without breaking a sweat.

The platform is designed to accelerate the development, governance, and operation of a multi-institution business network.

The Blockchain service provided by IBM offers easy maintainability and deployment for developers via a clean interface that controls most of the operations. Operations like activating the network, generating a smart contract, checking the logs, and restarting the network are just one click away. That’s wow.


To start using the Blockchain service from IBM, just register and go to the catalog to see the number of services they provide.

In the Blockchain service, create an instance with the “Starter Pack” option. It is free for 30 days and provides the same unlimited service as the paid one. You can get comfortable with the UI and the application in about a week.


After launching the service, you can see the welcome screen with a “Create Network” button. Click on it.

Now you can see a wizard window asking for the basic details of your network. Fill in the information and proceed to the next step. The next step involves inviting people into your network. You might have to provide their organization name and their email to send an invite.

As the last step, confirm the information you provided and click on DONE.

P.S.: These are the steps at the time of writing. The information asked for in the wizard might change in the future.


Adding Peers

To have an active blockchain network, we need peers to be added to the organization. IBM’s Blockchain service starts the process by asking you to Add Peers, a well-designed workflow. You need to enter the details of the peers and proceed to the next screen.

Now you can see the peers with the status information and also the options to stop, restart or see the logs of each peer.

Creating the Channel

The wizard then proceeds by asking you to Create A Channel. This is a 3-step process:

1. Entering the Channel Information

2. Inviting the users to join the Channel

3. Defining the policy for Channel access

As part of the invite, the users get a channel request. In order to proceed to the next step of starting the network, the channel’s policy needs to be satisfied, i.e., a threshold number of users should accept the request.

Joining Peers to the Channel

Keeping up the pace, as soon as the channel is created, the wizard asks the user to select the peers to join the channel. After joining, you can see that the channel has been created.


Installing and instantiating chaincode is never an easy task. But IBM has made it easy with a few clicks. Click on the Chaincode tab, choose a Peer and click on Install Chaincode. This will navigate you to the wizard that asks for basic information about the chaincode and an upload button to upload your chaincode. This will also automatically identify whether the Chaincode is written in Golang or NodeJS. This is a beautiful automation provided with the service.

As a next step, instantiation is done simply by passing the args that one would usually supply to the init() function.

Choose a Channel. Click on submit. And it’s done!

The time it takes to deploy a blockchain is no more than 5 to 10 minutes. That’s a huge reduction. In fact, IBM has made about 80% of the deployment process accessible with just a few clicks. All this sounds pretty futuristic, but you need to use it to believe it!

Beginner’s Guide to deploying your Blockchain in IBM Bluemix was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Blockchain for the masses: A peek into the not-so-distant, decentralized future

2018-05-16 17:12:40

Every technology first serves the researchers and the nerds, then the businessmen, and finally the masses. When it starts serving the masses, we call it a mature technology.

Image Credits: TheDigitalArtist, Pixabay
“A computer on every desk and in every home.”
– Bill Gates, 1980

A few decades ago, Bill Gates envisioned a computer in every home. Looking back, we have come a long way: on average, we now have more than one computer in many homes. In addition to the computer, most of these homes have smartphones, tablets, smart appliances, etc., and, technically, each of these devices contains a (small) computer. Hence, more than a computer in (almost) every home.

Most of the time, these devices are inter-connected and also connected to the internet. With network attached storage and/or cloud storage these devices can communicate and share data with each other. We are already seeing trends with people (read power users) adding routers and network storage drives to connect all devices in the home.

Then there are the AI- and speech-recognition-based smart home assistants helping us with various everyday things. For example, each morning Alexa wakes me up (or tries to), plays me music, and updates me on the weather, traffic and cricket matches around the world.

All these technologies — computers, the internet, AI, human-computer interaction, etc. — started as research, then became popular in enterprises, and finally reached the masses in the form of the devices mentioned above. Similarly, blockchain today is mostly being used (or should I say tried?) by academic institutions and by small and big enterprises. It is yet to reach the masses.

Here’s an excerpt from Andrew Keys’ (ConsenSys) interview where he talks about the current state of blockchain technology,

My thesis is that we’re in ’93 of ’96 — of the next generation of the internet — where in 1996, you were able to work in a permissionless setting. Until then, it was all intranets. — Andrew Keys, ConsenSys

As Andrew rightly observes, we still need a few years for blockchain to be ready for widespread usage. I believe that’s when enterprises will start relying on blockchain. We will see some real-world problems being solved by blockchain without the need for centralized backup (plan B) systems. This first wave of production deployments could be more of a B2B setting. After that, I think, we may need a few more years before blockchain is used in a B2C setting and directly reaches the masses.

Apart from readiness in terms of scale, performance and stability, there is another dimension to a technology being ready for the masses: usability. Today blockchain looks like a complicated technology, with all the math and cryptography coming into the picture and making it look like rocket science to a layman. But this was exactly the case with computers, the internet and AI in their infancy. For example, back in the 80’s, to connect to a different computer on a network we had to manually save its address into a file on our computer. Today things are way simpler: when a computer connects to the network, it goes through a whole process to get an identity (IP address) on the network. As for cryptography, today when we connect to any website over HTTPS, the same cryptographic concepts used in blockchain come into play; it’s just that the advanced web browsers and servers of today have abstracted away the details. Similarly, in a few years’ time, blockchain will start looking simpler and easier because we will have built the tools to abstract away the complicated details.

Now that we’ve looked at how some of the prominent technologies have advanced in the past and the current state of blockchain, I will now try to paint a picture of the near future when blockchain will start reaching the masses.

To paint this picture of the future, let’s start with the present. Today, as mentioned above, we have several devices at home and they are all connected. Generally, we have a router which connects all these devices to each other and to the internet using WiFi (or LAN). Now, let’s bring a blockchain node into the picture and connect it to the router as well.

Image Credits: HelpNetSecurity

Just like other devices and appliances connect directly to the router, and hence the network, we should soon see blockchain nodes packaged as ready-to-use, plug-and-play devices. There will be no need to set up a node yourself; there will be pre-packaged, pre-configured nodes, ready to connect to a chain and a network.

Let’s also look at what this pre-packaged node might look like. Today, to set up an Ethereum node, we need to download one of the Ethereum clients onto a computer, set up our wallets and accounts, and then run the client from a command line or a basic GUI. To interact with this node we need to connect it with one or more dApps. Tomorrow, in the near future, here’s how the process might look:

  1. Go to an electronics store (physically or online)
  2. Browse the pre-packaged blockchain nodes available in different configurations.
  3. Purchase one and bring/get it home.
  4. Unpack and connect it to power and network.
  5. Set your identities and configure your network parameters.
  6. Connect this node with dApps on your phone and laptop (just like your phone connects to a network, it will pair/connect with this node)

After reading these steps, if you are getting a feeling of déjà vu, that’s natural: this is how we get new smartphones, smart TVs and other smart appliances today. Why not blockchain nodes, tomorrow?

Just like the PCs of today are pre-configured with software licenses and OEM-specific drivers, these pre-packaged blockchain nodes will come with built-in hardware wallets and network configurations. Going a step further, these blockchain nodes will also support cloud subscriptions, so that only the required state of the blockchain stays on the hardware at home and the rest is backed up in the cloud, saving bandwidth and electricity.

Today, we can install and run multiple blockchain clients side-by-side on the same hardware. Similarly, in these pre-packaged blockchain nodes, there will be options to connect to multiple networks. Some of the obvious/pre-configured choices might be connecting to your preferred payments network, your preferred identity network, decentralized social networks, decentralized market places, etc.

Now that we have a blockchain node at home, connected and syncing with one or more networks, let’s look at how it will be used.

In the tweet embedded above, Trent mentions two phases of the blockchain movement, the latter being about optimization (of civilization). A contributor to this optimization phase can be the household node, which can help us optimize almost everything related to households: services, utilities, payments, communications, etc.

Payments and services

All payments to utilities will be made using this blockchain node at home. We will not have to connect to several websites and apps to pay for electricity, internet, water, heating, and so on. This will spare us from entering sensitive information into several centralized systems and middlemen (payment gateways).

The electricity provider will simply provide an address and a subscription ID to send payments to. At the next level, the smart electricity meter at home will connect directly to this blockchain node to make automated yet secure payments. Similarly, the smart router will be able to pay for the internet subscription. Work on tracking usage and paying for utilities using blockchain has already begun in many countries.


All important public communications will be done by broadcasting on, or using, the blockchain networks. The blockchain node will replace radio and TV by making communications more secure and authentic, putting a lid on fake news.

To generalize, a lot of conflict resolution will happen on the blockchain; or, I should say, conflict resolution will not be needed because of blockchain. The household node will be instrumental in making sure all payments and communications are tamper-proof right from the origin.

Identity and Authentication

Because this blockchain node is connected to the local router (remember?), it is also connected to the other devices at home: tablets, phones, etc. Using multi-party computation, the signing keys for the blockchain node will be shared with one of these devices, and as soon as you make a transaction using the blockchain node, you will get a notification on your phone to approve it.

IoT and Smart Appliances

Through the router, the blockchain node will also be connected to all smart appliances at home. The awesome IoT scenario of the coffee machine ordering its own coffee and milk will finally become much more secure, because the identity of the coffee machine will be registered on the blockchain along with your identity, and the two of you (you and your coffee machine) will multi-sign transactions for overall integrity. Washing machines will be able to pay for detergents.

Network of blockchains

To serve all these use-cases, one network might not be sufficient. There will be a network of blockchains, just like the internet is a network of networks. To make sure that the different chains serving various use-cases can communicate with each other, new standards will be developed for inter-chain communication. The household blockchain node will support these standards and clients for multiple chains, and all the complicated details will be abstracted away from us by the node. This will be exactly like how we use the internet today, with standards like HTTP, TCP/IP, WWW, JSON, etc. making networks and services communicate with each other.


To make all of the above (and much more) happen, we need to solve the current challenges of decentralized systems. A technology becomes mature when its interfaces become seamless and its usage becomes easy. Today, as we know, blockchain interfaces are complex and hence usability is low. Apart from this, there are challenges with scalability and privacy. These challenges are well known to the community and its torch-bearers, and there is plenty of work going on to solve them.


This was just a stream of imagination about what the blockchain future might look like for common people. There can be a thousand different variations of this, depending on how mature our knowledge of blockchain is and what we want to solve with it. Some crypto experts, after reading this, might question the feasibility of some of the things mentioned above. What matters more, I believe, is that those of us working in the blockchain industry make this technology usable by the masses, just like our seniors made computers, the internet and AI usable for all of us and our non-technical family members.


Blockchain for the masses: A peek into the not-so-distant, decentralized future was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Golden ratio in layout design

2018-05-16 17:12:20

How do you create a software layout that is pleasant to the user’s eye? Use the Fibonacci sequence and combine maths and art for a pixel-perfect result.

There’s a mathematical ratio, common in nature, that can be used to create pleasing, natural-looking compositions in your design work. It is called the Golden Ratio, Golden Mean, Fibonacci Spiral or Fibonacci Sequence.

The Golden Ratio or Fibonacci spiral

This kind of composition is widely used in art and photography. But first and foremost, this ratio is a common pattern found in nature. E.g. pinecone shape, sunflower seeds, weather patterns, etc.
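The link between the Fibonacci sequence and the Golden Ratio is easy to verify yourself: the ratio of consecutive Fibonacci numbers converges to φ ≈ 1.618. A quick JavaScript sketch:

```javascript
// The ratio of consecutive Fibonacci numbers converges to the Golden Ratio.
function fibonacciRatio(n) {
  let a = 1, b = 1; // F(1), F(2)
  for (let i = 2; i < n; i++) {
    [a, b] = [b, a + b];
  }
  return b / a; // F(n) / F(n-1)
}

const PHI = (1 + Math.sqrt(5)) / 2; // ≈ 1.6180339887…
console.log(fibonacciRatio(30)); // already extremely close to PHI
```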

Golden ratio in photography composition

Designing a piece of software has many aspects to consider. There are a ton of books and articles on best practices for creating a stunning user experience. One aspect is devoted solely to visual composition: the software has to have a pleasant visual appeal. If it is pleasant to the eye, users are more likely to subconsciously get to like it. More importantly, if the composition is right, the eye will naturally rest on the things that are important.

If the composition is right, the eye will naturally rest on the things that are important

There is a lot we can do to subconsciously draw attention to our call to action or any other part of the software. I like to use the Golden Ratio in layout composition because it naturally amplifies every other UX technique. Place the Fibonacci spiral on top of your screen and start the layout plan. Place the most important information in the center of the spiral; the components there will grab users’ attention at first glance. This principle works great with the Gutenberg Diagram presented below.

The Gutenberg Diagram describes a general pattern our eyes follow when interacting with our application or content. The pattern suggests our eyes will sweep across and down the page in a series of horizontal movements. Each sweep starts a little further from the left edge and moves a little closer to the right edge. The overall movement is for the eye to travel from the primary area to the terminal area. This path is referred to as ‘Reading Gravity’.

Gutenberg diagram in combination with the Fibonacci spiral

Important elements should be placed along the reading gravity’s path. The most important information should go in the primary optical area. As you can see, the primary optical area sits right at the center of the Fibonacci spiral, which makes it a perfect place for the user to notice. The strong follow area should contain useful data as well, so that the user maintains focus. The terminal area is usually reserved for a CTA, where attention is focused.
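In practice, applying the Golden Ratio to a layout often just means splitting the available space into two sections in the ratio φ:1, for example a content column and a sidebar. A small sketch (the helper name is my own, not from any framework):

```javascript
const PHI = (1 + Math.sqrt(5)) / 2; // the Golden Ratio, ≈ 1.618

// Split a total width into a major and a minor section in the ratio PHI : 1.
function goldenSplit(totalWidth) {
  const major = Math.round(totalWidth / PHI);
  return { major, minor: totalWidth - major };
}

console.log(goldenSplit(1280)); // e.g. content column vs. sidebar widths
```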

If you take this approach, it will improve your design by a percent, or even less. But details are important. That percent might add just enough value to your product that users love it. There is a thin line between a meh and a wow reaction, so you want to use as many tricks as possible, including this tiny one. Cheers!

Golden ratio in layout design was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Google I/O 2018: All you need to know about the key announcements

2018-05-16 17:09:38

© Google

Google has hit the headlines again, and this time it’s the major updates and new tech unveiled by the company at the famous Google I/O 2018 developer conference at Shoreline Amphitheatre in Mountain View, California.

Coming up with new technologies and announcements is always a surprise from Google for its millions of customers worldwide. Just a year ago, major advancements were made in AI assistants and hardware, and this year things have gotten even better.

Let’s quickly wrap up what we saw at the action-packed I/O conference.

Android P Beta released


Google introduced the Developer Preview of Android P earlier this year; now we have the new beta version of the OS, which will be available for a select range of devices. As for what the “P” stands for, we may find out later this year.

So what all comes with Android P?

Google has totally refurbished navigation. The navigation bar is reduced to a small home button at the bottom, and swiping up on it takes you to recent apps. Swipe again and you get a horizontal bar that you can keep scrolling through to access your apps.

Source : Google

In the new navigation bar, there is no overview icon or Clear All setting, but as per a tweet by VP of Engineering David Burke, the feature will be available in upcoming updates. You can also customize the navigation bar by changing colours and backgrounds, and even by adding custom widgets.

Power packed Android P

It has long been a requirement for many users to track their mobile usage and make better use of their time. Now, with the Dashboard feature, you can easily check how long you have been using the device, certain apps, or surfing the internet.

In apps like YouTube, you can even get a notification telling you when to take a break from your phone. With the new Wind Down mode, you can switch the display to grayscale when using it at night.

© Google

Notifications will now be more useful in terms of providing value: you no longer need to open a specific app to perform a certain activity.

Based on a notification, Google will suggest actions like booking a cab or adding text and images with the help of “Reply”, a smart-reply feature aimed at quickly responding to messages without actually typing them.

© Google

The adaptive brightness feature adjusts the screen’s brightness to the user’s preferences depending on the light conditions.

Google has also focused on better battery management, saving battery on apps that are rarely used. Having partnered with DeepMind, Google prioritizes battery for the apps and services the user uses most.

© Google

Smarter Google Assistant

Google Assistant now comes with six new voices, including that of renowned singer John Legend, to make conversations sound more like a human and less like a robot.

You also no longer need to say “OK Google” every time you interact with the Assistant, since that breaks the flow of a continuous conversation.

You just need to say it once to initiate the process, and then you can have a normal conversation with the Assistant. In addition, you can ask multiple questions or make multiple requests simultaneously.

Google Duplex

One of the very cool new features: the Assistant can now book appointments for you. Yes, you heard that right. Suppose you want to book an appointment for a haircut at the barber: you don’t actually need to make the call.

The Assistant will call the barber and book your slot, and the conversation sounds much like a normal conversation between two humans.

Google Photos

Google Photos will now come with AI-powered tools for creating collages and movies.

© Google

It will also be able to suggest fixes like colourization, brightness adjustments and rotation. Moreover, it will detect photos of your documents and convert them into PDF format.

Gmail’s Smart Compose

A few days back, a newly redesigned Gmail was launched by Google. The new updates aim to improve users’ productivity. For those who write the same kind of email over and over in a day, Gmail will now smart-compose them on your behalf.

Source : Digital Trends

Now, whenever you write an email, Gmail will give you suggestions for words or phrases, and you just need to hit Tab to use a suggestion. The new feature is expected to arrive within a week for personal Gmail users.

“Pretty Please” Google

Like Amazon’s Echo Dot Kids Edition, Google is looking to come up with a feature that aims to teach manners to children at home.

Google Assistant can force your kid to say please and keeps the mic hot

To talk to the Assistant politely, kids need to say “thank you” and “please” to it, just as they would with any other human being.

Google Smart News

Google is now also putting emphasis on providing the most relevant news content to users with the help of AI.

It is redesigning the interface and will present you with the news that Google thinks is important for you in the “For You” tab, based on your past searches.

© Google

It is also offering a Full Coverage section that lets you see stories covered from multiple sources. You can also now subscribe to magazines via the app, making the subscription process easier.

Personalized Google Map

© Google

To provide better recommendations about local places, Google is now bringing the power of AI to Maps.

Now, wherever you are standing, you will not just see the direction you should be heading in: a full street view will be shown using the camera. The personalized “For you” tab will show restaurants, amusement parks and other nearby places that you might be interested in.

© Google

In addition, by pointing your camera at any building or text, Google Lens will easily identify it and show details about it.

© Google

Let us know in the comments what you think of the new products and updates announced by Google, and how big a role you think AI will play in our real lives in the future.

Google I/O 2018: All you need to know about the key announcements was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Building a better file uploader for the web

2018-05-16 17:08:55

Introducing Uppload, a JavaScript uploading widget

The input element was introduced in the first HTML+ discussion document in July 1993 for entering data in forms on the web. Dave Raggett’s iconic HTML 3.2 Reference Specification of January 1997 included <input type=file> from the 1995 memo RFC 1867, written by Xerox scientists Larry Masinter and Ernesto Nebel, which described HTML forms with file submissions:

Currently, HTML forms allow the producer of the form to request information from the user reading the form. These forms have proven useful in a wide variety of applications in which input from the user is necessary. However, this capability is limited because HTML forms don’t provide a way to ask the user to submit files of data. Service providers who need to get files from the user have had to implement custom user applications.

The MIME type multipart/form-data and the accept attribute — things that we still use to upload files today — were both described in that same paper over 20 years ago. Native file uploading is incredibly elegant and very easy to use. We’re all used to the following syntax:

<form method="post" enctype="multipart/form-data">
  <input type="file" accept="image/png" name="myFile">
  <button type="submit">Upload</button>
</form>

<?php
if (isset($_FILES["myFile"])) {
    move_uploaded_file($_FILES["myFile"]["tmp_name"], "./file");
}

Today, however, we have the File API with FileReader, FileList, Blob, and more, which make it very easy to manipulate files on the client and then upload them to the server.
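As an illustration of what FileReader.readAsDataURL produces, a data URI is just the file’s MIME type plus its bytes encoded as base64. This Node sketch rebuilds one by hand (in the browser, the File API does this for you):

```javascript
// Build a data URI the same way FileReader.readAsDataURL would:
// "data:<mime-type>;base64,<base64-encoded bytes>".
function toDataURL(contents, mimeType) {
  const base64 = Buffer.from(contents).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

console.log(toDataURL("hello", "text/plain"));
// data:text/plain;base64,aGVsbG8=
```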

These APIs aren’t supported by every browser, but when you combine them with new functionality like DataTransfer, which lets you drag and drop files right into the browser, and MediaStream, which lets you access the user’s camera and microphone, you can create an exceptional user experience.

The problem is that building custom file uploading components is hard and time-consuming, especially when you consider maintaining compatibility for browsers that don’t support these new APIs.

The current landscape

Left to right: The file uploading widget of Filestack, Uploadcare, and Cloudinary

There are many services that offer file uploading widgets, but they’re all into the business of file storage and content delivery.

Uploadcare, for example, lets you store up to 500 MB of files for free, and their next plan is 7.5 GB of storage for $25/month. In comparison, Google’s Firebase gives you 5 GB of storage for free and then charges $0.026/GB; that comes out to over 128x cheaper. Amazon Web Services’ S3 storage is a little more expensive at $0.039/GB, but that’s still over 85x cheaper.

Cloudinary gives you 10GB of free storage, which is comparatively better, until you realize that their next plan starts at $99 billed monthly. Filestack also gives you half a gig for free and starts billing $49/month for 25 GB.

The solution

An open-source file uploading widget…

Cloudinary, Filestack and others offer their own proprietary JavaScript widgets, which might be good for business but doesn’t promote community contributions; as a result they feel lacking and dated. Cloudinary especially looks like it was made for the Web 2.0 era. What we want is an MIT-licensed, completely free and open-source solution that encourages extension.

…which works with any backend…

Uploadcare does open source their widget under the BSD 2-clause license, but it only works with their backend services. It calls their APIs, uses <iframe>s for services like Instagram and Import from URL, and only lets you upload to your Uploadcare account (you can connect external storage on premium plans). What we want is a widget that is completely backend-agnostic; it should work with any server that can handle HTML form uploads and should let you upload to any third-party managed service like S3 or Firebase.

…and allows modular services…

All three widgets support drag-and-drop uploads and importing from URLs. Uploadcare is the only one that supports taking a picture with the user’s webcam (something that’s extremely handy for quick profile pictures, especially on mobile devices), while the others integrate with Google Image Search to let you find images. I cannot comment on the legality of uploading unlicensed photos, but it’s interesting to know that it’s there.

These services are very handy, but require constant updates and larger bundle sizes when new features have to be added:

You can’t run around and add a button to these things. They’re already shipped. So what do you do? — Steve Jobs, iPhone introduction in 2007

What we want is a completely modular approach that lets developers create their own bundles based on the services they’re interested in adding, and dynamic loading of those services when serving assets from a CDN. This would create small bundle sizes for webapps — check drag-and-drop, camera, and crop, and your bundle is created. Similarly, when using the CDN, only users who use the webcam or crop their images or import from Instagram will have to load the code for those services.

This also helps developers very easily build their own services and modules to extend the scope of the widget’s functionality. A developer wanting to integrate Google Photos or YouTube videos can very easily write a module and dynamically load it through the uploader. It just works.

…with graceful degradation

The new File API results in great UX: file previews, drag-and-drop, and easy Blob and Data URI manipulation. Unfortunately, these APIs aren’t supported by older browsers (released over 5 years ago), even with polyfills. This doesn’t mean the widget should stop working; rather, it should fall back to a simple HTML file input.

There is no workaround for drag‘n’drop in older browsers — it simply isn’t supported. The same goes for image previews, etc…users using an old browser will [still] be able to upload files. It just won’t look and feel great. But hey, that’s their fault. — Matias Meno for Dropzone

Our widget should work very well right out of the box for the majority of users, but should still work well enough for everyone else. It should definitely not fail outright for anyone.
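Graceful degradation starts with feature detection: check for the APIs the fancy widget needs, and keep the plain file input when any are missing. A minimal sketch (the function and widget names are my own, not Uppload’s API):

```javascript
// Return true only if the environment exposes the File API objects the
// fancy widget relies on. Taking the global object as a parameter keeps
// the check testable outside a browser.
function supportsModernUpload(env) {
  return ["File", "FileReader", "FileList", "Blob"].every(
    (api) => typeof env[api] !== "undefined"
  );
}

// In the browser (hypothetical widget functions):
//   supportsModernUpload(window)
//     ? initFancyWidget()     // drag-and-drop, previews, camera
//     : keepPlainFileInput(); // native <input type="file"> still works
```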

Say hello to Uppload

Uppload is free and open-source, modular, backend-agnostic, and cross-platform. It works on mobile, desktop and even Internet Explorer 11. It works with regular HTML file upload backends with no extra configuration required, and has built-in extensibility support that makes using Firebase, S3 and other services a piece of cake. It’s dependency-free and also comes with wrappers for Vue.js and React.

We have a task list we’re currently working on, including a truly modular build system, increased browser support and test coverage, smaller bundles, more modules, and even more wrappers.

📷 Try Uppload ➡ · GitHub · NPM

By Michael. Uppload is an open-source file uploader with a hamster as a logo 🐹

Building a better file uploader for the web was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to Stay Blissfully Positive at Work

2018-05-16 17:08:29

I am sure everyone here struggles at some point with keeping a positive mind: every morning you hit snooze over and over again, you dread going to work each day, and being at work feels like the worst thing in the world.

Depressing, isn’t it?

If this is you, then you are certainly reaching a breaking point.

I believe we all need to be aware of our unpleasant thoughts and what exactly influences them. Getting through the negativity is not easy; we need to find ways to combat it, with a daily reminder to ourselves not to be affected by our external environment.

I am well aware of the negative communication that occurs in a majority of work settings, which is highly dangerous to physical and mental well-being. So what should you do if you don’t want negativity in your life? Get rid of the cranky slumps and switch to positivity.

Let’s talk about how to be positive at work to make sure negativity doesn’t impact your life.

How to detox negativity at work

Start the day right — Dress up and show up

“Either you run the day or the day runs you.” Jim Rohn

The early hours of the day set the tone for the rest of it, positive or negative. You either hop out of bed and plan the best for your day, or hit snooze and dread the rest of it. Look up to people like Bill Gates, Steve Jobs and Richard Branson, and you will never hit snooze again.

When you wake up in the morning, the serotonin (or happy hormone) level in your brain is at its highest.

Start the day right by mentally organizing yourself, tune into something happy, walk around, eat and drink healthy and also make sure to dress nicely for work. Introduce your body and mind to mindfulness and you can then let go of the negative thoughts. When you start your day right, you will never find your mind wandering away towards negativity.

Have control over your situation — Feed your mind

“The goal is to grow so strong on the inside that nothing on the outside can affect your inner wellness without your conscious permission.”

Every situation comes down to the perception you choose to make of it. You are the sailor, and you have to sail the ship through any situation. The attempt to control a situation happens within your mind, and the power to consciously design your situations lies in positivity. When you prepare yourself for any situation with positive energy, you will soon see a good change in your life.

Keep the enthusiasm alive — Feel enthused

“Every man is enthusiastic at times. One man has enthusiasm for 30 minutes, another for 30 days, but it is the man who has it for 30 years that makes a success out of his life.” — Edward B. Butler

It’s all about your attitude. Lack of enthusiasm creates unhappiness and dissatisfaction. For instance, major league baseball player Frank Bettger lacked enthusiasm and was demoted to the minors for it. Stung by his manager’s decision, he set out to build a reputation as the most enthusiastic player in the league. Eventually he got what he wanted, along with a commendable raise, made possible only by the power of enthusiasm he developed to give himself a better chance.

Similarly, the power of enthusiasm will go a long way toward keeping you positive, as you will feel that there are more good things ahead than the negativity that is bogging you down. Create something to look forward to with enthusiasm, something that triggers a little happiness.

Let go of negative people — Negative vibes

“Protect your spirit from contamination. Limit your time with negative people.” Thema Davis

Do you feel those lower vibrations when you are around negative people? Such people will always discourage you from following your dreams and show you the dark side.

The old saying “birds of a feather flock together” simply means either people who are similar naturally find each other or people become alike over time. As the “neurons that fire together wire together”― surround yourself with positive people to develop a positive energy. Allow yourself to connect with more positive people every day!

Do not get grumpy; start acting on how to stay positive at work. Put a positive spin on everything that comes your way. The more you practice improving yourself in a negative environment, the more you will find your path to personal growth.

Face it: positive people are more likely to get what they want out of life, and if by any chance they fail to, they never whine about it; instead, they still enjoy their lives.

Share with the tribe: how do you clear the negative energy around you?

Author Bio:

Vartika Kashyap is the Marketing Manager at ProofHub and has been one of the LinkedIn Top Voices in 2017. Her articles are inspired by office situations and work-related events. She likes to write about productivity, team building, work culture, leadership, entrepreneurship among others and contributing to a better workplace is what makes her click.

How to Stay Blissfully Positive at Work was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Build on protocols (not necessarily blockchains)

2018-05-16 17:07:12

Standard gauge railway tracks helped trains reach most of the planet

Investing in fat protocols is all the rage in the startup world today. Still, investing in protocols is nothing new. Web 2.0 startups came to life thanks to the Web’s open & standard (🙏) protocols (HTTP, HTML, CSS, AJAX). Before that, many networking companies bet their existence on the TCP/IP stack while others put their bets on proprietary, non-standard networking protocols that ended up in the trash bin of computer history.

Human decisions are driven by transaction costs. Understanding an outlier or a non-consensual idea or protocol takes a toll on our energy reserves, and we are made for energy preservation. Over time, many new ideas flourish and appear to win the race; however, in the long run people tend to consolidate on protocols that lower the barrier of communication. The power of these protocols is all around us and extends way beyond bits & bytes.

Humanity has been consolidating on a few protocols over the centuries. Think about how many religions, languages and political systems gave their place to the ones we know today. Some of the first global protocols were 🏛 Latin (language), ✝ Christianity (moral system) and 🥇 Gold (store of value), now being replaced by 🇬🇧 English, 💻 the Internet, 🏦 Liberalism, and ₿ Bitcoin, in my naive opinion. Other widely used protocols are the standard gauge railway, the 220V electricity network and the RCS messaging system.

The world of business is no stranger to consolidation. Businesses more than anything else are averse to high transaction costs, uncertainty, and tribal knowledge. This is the foundation on which Open Source came to dominate the modern enterprise. Wide collaboration, vendor lock-in avoidance, and permissionless innovation are just some of the core benefits of open source systems as protocols for information processing.

But it’s not only code that makes a protocol. Actually, we should look way back at the primitives of a new language, protocol or process. How is a protocol formed? At a minimum, a set of stakeholders agree that there is a problem and try to build a common vocabulary to define the problem and its proposed solution. Various neutral organizations are set up to manage the complexity of reaching consensus on widely supported proposals that will become the new protocol. Most probably you have heard of the W3C, the Apache Foundation, the Linux Foundation, the Cloud Native Computing Foundation, etc.

At Marathon Venture Capital we believe that immense opportunities for innovation and value creation lie here. When competing organizations agree upon the definition of a problem, there are two things you should take for granted: the problem is huge, and a solution/product is badly needed.

In the enterprise space, a lot of companies have been built on top of open standards and protocols such as the Web, OAuth, OpenID, ANSI SQL, several Apache projects like Hadoop, Kafka, and Lucene. The amount of innovation that sprouted in such places is astonishing. Below I will highlight some of the incubating protocols and specs that I am really interested in and I believe have great potential. We will need startups to implement them, offer enterprise versions and ride the flywheel as the market adopts the new standards.

  • Apache OpenWhisk, a serverless platform that was started by IBM.
  • OpenEvents is a specification for a common, vendor-neutral format for event data.
  • OpenTracing describes vendor-neutral APIs and instrumentation for distributed tracing.
  • OpenCensus is a single distribution of libraries for metrics and distributed tracing with minimal overhead that allows you to export data to multiple backends.
  • Open Container Initiative is a lightweight, open governance structure (project), formed under the auspices of the Linux Foundation, for the express purpose of creating open industry standards around container formats and runtime. Containers won’t be one-size-fits-all; a prime example is balena (a container engine for IoT).
  • IPLD creates specs for the content-addressed, authenticated, immutable data structures.
  • W3C WebAuthn defines an API enabling the creation and use of strong, attested, scoped, public key-based credentials by web applications, for the purpose of strongly authenticating users. Let’s kill the passwords already.
  • W3C Interledger is an open suite of protocols for connecting ledgers of all types: from digital wallets and national payment systems to blockchains and beyond.

It’s safe to say that not all of these protocols are going to succeed. Still, riding a wave, i.e. a protocol with fast adoption like OpenTracing, is a meaningful bet for your startup. For example, developers are talking about observability in their cloud-native applications, and monitoring is built into new applications versus the status quo of relying on proprietary solutions offered by vendors like New Relic and AppDynamics. Since the market is shifting to a standard protocol, your newly created application monitoring startup should embrace it. Replicating the playbook of the existing vendors is like selling steam engines when factories started installing electric wires.

If you are starting up today, spend some time exploring existing and incubating protocols or open source projects. Look for growth signals and put your efforts behind something larger than you; hopefully, you will be riding the right wave as the industry shifts and converges on the new gospel.

Build on protocols (not necessarily blockchains) was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Binance listing - a proof of legitimacy & seal of recognition

2018-05-16 17:06:47

Binance listings — proof of legitimacy & seal of recognition

CLOAK was recently listed on Binance & here’s what it did to us.

A bit of background information

I’m an ambassador for the privacy cryptocurrency CLOAK.

And this is our story.

CLOAK began its journey in 2014. The project has quietly stood its ground ever since, even through the crowdfunding frenzy of 2017, staying resiliently focused on its goals and working diligently on its foundation.

CLOAK is a privacy-centred payment cryptocurrency that pays 6% interest to its holders, is built on Proof of Stake and has been audited for efficiency by Cognosec, a respected security firm.

The way CLOAK is designed makes it environmentally friendlier, more profitable, more secure and more private than many of its competitors.

When I discovered CLOAK in late 2017, I was very impressed with the tight and active community, strong developer team, its purpose and how well designed the project is.

Over the past few months I’ve thrown my weight in support for them through a number of articles and comparative essays. Read more on CLOAK in my other writing here or check out their website. I will leave more links for you in the comments section, too.

CLOAK spent the past few years exclusively building the code and the infrastructure. Now the focus has shifted outward, with more time and resources dedicated to its brand and gaining wider recognition for its hard work — earning the attention of influencers such as Clif High.

Two important steps in this new phase of CLOAK’s journey are availability and liquidity, which brings us to Binance…

Getting listed on a major exchange

Even though CLOAK has a great product — namely Enigma, its cloaking service that hides all traceability — a great project cannot succeed alone.

CLOAK needs trusted and capable partners to forward its mission of privacy.

Binance currently enjoys considerable prestige in the crypto-verse. It’s a smooth-running and efficient exchange, in touch with traders’ needs and meticulous in its listing procedures, earning it the trust of millions of users across the globe.

This gives them command of substantial trading volume as well as advertising power. Many of you may be aware that, as a result, the cost of a listing on Binance is a considerable dent in any project’s budget.

Mind you, CLOAK being an early project means that it did not benefit from the massive rounds of free funding all recent projects enjoy.

It is a testimony to the dedication of the men and women in CLOAK’s team and amongst the early adopters that the project has survived and thrived all the way to 2018 and is now listed on major exchanges, such as Binance.

What can a listing do for a project?

Exchanges are vital partners for cryptocurrencies, as they provide access not only to new markets, but plenty more.


Liquidity is important for the wider adoption and trustworthiness of any cryptocurrency. Binance is the second largest cryptocurrency exchange in the world, specialising in alternative coins and tokens. Its daily volume reaches a whopping 2 billion US$. The exchange has the kind of track record that it can be counted on to push the volume of a new listing.

Getting listed on Binance helped CLOAK more than double in valuation, a 100+% increase. Its market cap tripled to 75 million US dollars, half of what it was at the height of the bull run in December 2017.

The new valuation has remained relatively stable ever since. Rumours that CLOAK was getting listed on Binance circulated as early as April 14, and the listing was officially announced by Binance and CLOAK on April 18.

Other pros of a major listing


Knowing that the project has undergone rigorous checks by a reputable exchange builds investor confidence and offers proof of legitimacy to CLOAK.

Binance is known to choose quality projects to list. While many exchanges only ask for money, Binance also checks the project’s background before listing to protect its users, its reputation and its reliability.

When listing a cryptocurrency on a reputable exchange, you can be sure that questions like the following have been addressed:

  • Who are the founders?
  • Can the founders be verified as real people?
  • Are they willing to do KYC on themselves?
  • Do they have recommendations from trusted sources and projects?
  • What other companies have they worked for?
  • How innovative is the project’s technology?
  • Does the project have the potential to achieve its goals?
  • How much experience does the team have in the industry they are trying to disrupt with their project?
  • Does the team have a good track record for delivering products?
  • Do they have reputable and useful partnerships?
  • How big and active is the project’s community?

A large, supportive community helps make the project’s exchange launch a success.

  • How well are the tokens distributed?

For example, if the lion’s share of tokens is held by the team itself or by a handful of wallets, there’s a big chance they will dump their tokens for profit.

Brand recognition.

An unknown project is likely to be treated with caution, but a brand a trader has seen many times will command a sense of familiarity and spark curiosity.

A listing on a major exchange such as Binance is also a massive advertising opportunity in addition to the listing itself.

Even if investors do not invest in CLOAK immediately, the listing greatly advances the project’s brand recognition.

A listing on Binance circulates on the front page of its website, its news channels and its social media accounts. Hundreds of thousands of investors keep a daily eye on new listings and news from Binance. Binance’s Twitter, its most successful social media channel, reaches 780 thousand followers, and its website receives 60+ million hits a month.

Getting vital exposure amongst the social media channels of big exchanges may result in investments in the future.

Rise in rank

Another way to guarantee increasing brand recognition is entering the top 100 of CoinMarketCap, where the majority of investors linger.

The spike in volume that a listing on Binance invites is a massive help in boosting the rise of a cryptocurrency in CoinMarketCap ranking. If a project can penetrate the top 100 as a result, it would be looking at another potential pump to its valuation, legitimacy and brand recognition.

More markets, more possible investors

It’s clear that getting listed on major exchanges carries many benefits that advance the mission of any cryptocurrency.

Another major exchange CLOAK features on is UpBit (KR), which, similar to Binance, boasts a not-so-modest daily transaction volume of 1 billion US dollars and reaches many Koreans, amongst whom cryptocurrencies are very popular.

Readers will also be familiar with other exchanges CLOAK has built a relationship with, including the former giant Bittrex, which still commands a respectable 200 million dollars daily, Livecoin (RU), BuyUCoin (IND), Cryptopia (NZ), and the DEX (decentralised exchange) OpenLedger.

With these partners backing CLOAK, you have access to its cryptocurrency wherever in the world you are looking to invest from. As CLOAK grows its legitimacy and recognition in the crypto-verse, you will be hearing more from them!

Thanks to everyone for their continued support of CLOAK and keep an eye out for more great improvements and partnerships they have in store for 2018.

If you enjoyed my article, I’m also on:

Quora / Steemit / Medium / Twitter

Full disclosure: Nele Maria Palipea is an advisor for CLOAK, a privacy-focused cryptocurrency from 2014. This is not investment advice, nor is it an official representation of the project. It’s an opinion piece, so please consult official sources and contact the project for fact verification.

Binance listing - a proof of legitimacy & seal of recognition was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


A simple, real-world VueJS directive

2018-05-16 17:06:14

Photo by frank mckenna on Unsplash

VueJS is “The Progressive JavaScript Framework”. It takes inspiration from all prior art in the view library and frontend framework world, including AngularJS, React, Angular, Ember, Knockout and Polymer. In Vue (and Angular/AngularJS), a directive is a way to wrap functionality that usually applies to DOM elements. The example in Vue’s documentation is a focus directive.

When running VueJS inside of AngularJS, an issue occurred whereby the AngularJS router would try to resolve normal anchors’ href on click. The hrefs weren’t AngularJS URLs, so it would fall back to the default page. One solution could have leveraged components to update window.location directly, but here’s a nifty directive to do the same:

<a v-href="'/my-url'">Go</a>

That’s a pretty cool API, and it’s probably more idiomatic Vue than:

<MyAnchor href="/my-url">

There were a couple of gotchas:

Local vs global directive registration 🌐

A global Vue directive can be defined like so:

Vue.directive("non-angular-link", {
  // directive definition
});

It can also be defined locally as follows:

Vue.component("my-component", {
  directives: {
    "non-angular-link": nonAngularLinkDirective
  }
});

Where nonAngularLinkDirective would be a JavaScript object that defines the directive, eg.

const nonAngularLinkDirective = {
  bind(el, binding) {},
  unbind(el) {}
};

This allows for flexibility if using a bundler like webpack and single file components:

// non-angular-link-directive.js
export const nonAngularLinkDirective = {
  // directive definition
};

// MyComponent.vue
import { nonAngularLinkDirective } from './non-angular-link.directive';
export default {
  directives: {
    'non-angular-link': nonAngularLinkDirective
  }
};

A minimal directive API 👌

A full MyAnchor single file component would look like the following:

// MyAnchor.vue
<template>
  <a :href="href" @click="goToUrl"><slot /></a>
</template>
<script>
export default {
  props: {
    href: { type: String, required: true }
  },
  methods: {
    goToUrl(e) {
      e.preventDefault();
      window.location.assign(this.href);
    }
  }
};
</script>

This is quite verbose and leverages a global DOM object… not ideal. Here’s something similar using a directive:

// non-angular-link-directive.js
export const nonAngularLinkDirective = {
  bind(el) {
    el.addEventListener("click", event => {
      event.preventDefault();
      window.location.assign(el.href);
    });
  }
};

This directive has to be used like so: <a href="/my-url" v-non-angular-link>Go</a>, which isn’t the nicest API. By leveraging the second parameter passed to bind, we can write it so that it can be used like <a v-href="'/my-url'">Go</a> (for more information about el and binding, see the Vue documentation on custom directives).

// non-angular-link-directive.js
export const nonAngularLinkDirective = {
  bind(el, binding) {
    el.href = binding.value;
    el.addEventListener("click", event => {
      event.preventDefault();
      window.location.assign(binding.value);
    });
  }
};

We can now use it with a local directive definition:

// MyComponent.vue
<template>
  <a v-href="'/my-url'">Go</a>
</template>
<script>
import { nonAngularLinkDirective } from './non-angular-link.directive';
export default {
  directives: {
    href: nonAngularLinkDirective
  }
};
</script>

Vue directive hooks and removeEventListener 🆓

For the full list of directive hooks, see the Vue documentation.

As a good practice, the event listener should be removed when it’s not required any more. This can be done in unbind much in the same way as it was added. There’s a catch, though: the arguments passed to removeEventListener have to be the same as the ones passed to addEventListener:

// non-angular-link-directive.js
const handleClick = event => {
  event.preventDefault();
  window.location.assign(event.target.href);
};

export const nonAngularLinkDirective = {
  bind(el, binding) {
    el.href = binding.value;
    el.addEventListener("click", handleClick);
  },
  unbind(el) {
    el.removeEventListener("click", handleClick);
  }
};

This will now remove the listener when the component where the directive is used is destroyed/unmounted, leaving us with no hanging listeners.
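The catch is plain reference equality. A minimal stand-in for an element’s listener list (a hypothetical model, not the DOM implementation) shows why an inline arrow function could never be removed:

```javascript
// A Set is a reasonable model of an element's listener list:
// removal only works if you pass the very same function object.
const listeners = new Set();
const addEventListener = (fn) => listeners.add(fn);
const removeEventListener = (fn) => listeners.delete(fn);

const handleClick = () => {};
addEventListener(handleClick);
removeEventListener(handleClick); // same reference: removed
console.log(listeners.size); // 0

addEventListener(() => {});
removeEventListener(() => {}); // a *new* arrow function: nothing removed
console.log(listeners.size); // 1 — the anonymous listener is still hanging
```

This is exactly why the directive keeps a single named handleClick around for both bind and unbind.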

Handling clicks properly 🖱

An edge case happens when an anchor contains an image: the target of the event is not the anchor, but the img… which doesn’t have a href attribute.

To deal with this, with a little knowledge of how addEventListener calls the passed handler, we can refactor the handleClick function.

// non-angular-link-directive.js
function handleClick(event) {
  // The `this` context is the element
  // on which the event listener is defined.
  event.preventDefault();
  window.location.assign(this.href);
}

// rest stays the same

Using a named function allows the event listener to bind this to the element on which it’s attached, as opposed to the lexical this of an arrow function.
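The distinction is visible outside the DOM, too. In this small sketch, fakeAnchor is a hypothetical stand-in for an element, and Function.prototype.call plays the role addEventListener does when it invokes a handler:

```javascript
// A named function's `this` is supplied by the caller; addEventListener
// calls the handler with `this` bound to the element it is attached to.
function namedHandler() {
  return this.href;
}

// An arrow function ignores the caller and keeps the lexical `this`
// of the scope where it was defined.
const arrowHandler = () => this && this.href;

const fakeAnchor = { href: "/my-url" };

console.log(namedHandler.call(fakeAnchor)); // "/my-url"
console.log(arrowHandler.call(fakeAnchor)); // undefined — lexical `this` wins
```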

Parting thoughts 📚

We use window.location.assign so that the code is easy to test. With Jest and @vue/test-utils, a test at the component level looks like this:

import { shallowMount } from '@vue/test-utils';
import MyComponent from './MyComponent.vue';

test('It should call window.location.assign with the right urls', () => {
  // Stub window.location.assign
  window.location.assign = jest.fn();

  const myComponent = shallowMount(MyComponent);

  myComponent.findAll('a').wrappers.forEach((anchor) => {
    const mockEvent = {
      preventDefault: jest.fn()
    };
    anchor.trigger('click', mockEvent);
    expect(window.location.assign).toHaveBeenCalled();
  });
});

Directives allow you to contain pieces of code that interact with the DOM. This code needs to be generic enough to be used with the limited information available to a directive.

By coding against the DOM, we leverage the browser APIs instead of re-inventing them.

Originally published at on May 15, 2018.

A simple, real-world VueJS directive was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


What’s wrong with existing crypto games?

2018-05-16 15:48:53

About half a year ago, Crypto Kitties launched and amused everyone. A flood of transactions generated daily turnover that beat previous records. During that period we were conducting the Lordmancer II ICO, and we experienced significant issues with the Ethereum network due to the huge number of transactions.

It looked like Crypto Kitties was becoming a true hit: the first popular crypto game ever. Then, within just a few months, a lot of new crypto games were launched, all proclaiming blockchain-stored assets. Crypto games seemed just about to completely defeat other types of games…

But let’s take a closer look. There are some services that scan player activities in crypto games. For example,

If you open it, you will see a table as shown below.

As you can see, the number of Daily Active Users is about 500 for Crypto Kitties and about 1000 for all crypto games together. Once again, only a few thousand users play any kind of crypto game. And this number has not changed since last December.

1000 users — that may be the daily audience of a very average mobile or web game. Why is this number so small?

Well, there are many reasons:

  1. The total number of cryptocurrency holders is estimated at 1.5 million, compared to about 1.5 billion gamers in total.
  2. Not all crypto owners have Ether.
  3. Not all Ether owners have MetaMask installed, though it is almost the only way to play crypto games.
  4. All crypto games are pay-to-play. How many free-to-play games do you have installed on your device? How many paid ones?

It is too difficult to play crypto games, even for someone who is familiar with the crypto world!

There is one more reason. All existing crypto games could hardly be called games. The main, and in most cases the only, “gaming” activity is collecting. Actually, people don’t play them; they gamble with kitties. Crypto games are hard to play, they are “pay-to-play”, and worse, they are not really games at all.

Existing blockchain solutions, including Ethereum, can support only basic game mechanics like collecting or gambling. Every transaction takes a long time and costs gas. Until much better solutions exist, we have to consider creating games that combine blockchain with a classical architecture.

Lordmancer II is a good example of such a combination. On one hand, the game uses ERC20 tokens, Lord Coins, as in-game currency for trading between players, which brings real value to in-game items. On the other hand, anyone can easily start playing, as in general it is a free-to-play mobile game. You don’t need a PC, MetaMask, or even Ether or Lord Coins to launch the game. Just download it and play! If you decide to use the cryptocurrency-related features of Lordmancer II, you can either buy some Lord Coins (yes, then you will have to create an Ethereum wallet and register on an exchange) or try to earn crypto tokens by selling game resources and items to other players.

Lordmancer II will bring new users to the crypto world. Players will have a chance for a “soft” start in the cryptocurrency sphere. Let’s make the crypto community bigger!

Lordmancer II is in open beta test now and can be downloaded on

Join Lordmancer II Telegram community here

What’s wrong with existing crypto games? was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


How to Create an Agile Community of Practice

2018-05-16 14:15:44

TL;DR: Creating an Agile Community of Practice

Creating an agile community of practice helps win hearts and minds within the organization, as it lends authenticity to the agile transition — signaling that the effort is not merely another management fad. Read on to learn how to get your agile community going even without a dedicated budget.

How to Become an Agile Organization — When the Plan Meets Reality

Typically, the recipe for becoming an agile organization goes something like this: you need commitment from the C-level to change the culture of the organization and thus its trajectory. You also need strong support from the people in the trenches who want to become autonomous, improve their mastery, and serve a purpose. Then — in a concerted effort — the top and the bottom of the hierarchy can motivate the middle management to turn into servant leaders. (Or, probably, get rid of them Haier style.)

Accordingly, an action plan often starts with hiring a consultancy to help figure out a more actionable roll-out plan, mostly comprising training and workshops, initial team-building activities, and probably some audits concerning financial reporting requirements, technology or governance issues.

What this kind of orchestrated initiative often neglects is the grassroots part of any successful change: providing room and resources for the members of the organization to engage with the change process itself in a self-directed way.

A successful agile transition needs an agile community of practice.

The Purpose of an Agile Community of Practice

The purpose of an agile community of practice has two dimensions:

  1. Internally, it serves in an educational capacity for agile practitioners and change agents. There is no need to reinvent the wheel at team-level; regularly sharing what has proven successful or a failure in the context of the transition will significantly ease the burden of learning.
  2. Externally, the agile community of practice contributes to selling ‘agile’ to the rest of the organization by informing and educating its members. The members of the agile community also serve as the first servant leaders and thus as role models for what becoming agile will mean in practice. They bring authenticity to the endeavor.

Winning hearts and minds by being supportive and acting as a good example day in, day out, is a laborious and less glamorous task. It requires persistence — and being prepared not to take a ‘no’ for an answer but try again. Reaching the tipping point of the agile transition will likely be a slow undertaking with few signs of progress in the beginning. (Moreover, management tends to underestimate the inherent latency.)

If you enjoyed the article, do me a favor and smack the 👏👏 👏 up to 50 times — your support means the world to me!

If you prefer a notification by email, please sign-up for my weekly newsletter and join 17,129 peers.

A Portfolio of Services and Offerings of an Agile Community of Practice

Internal Offerings, Serving the Community

Improving the level of mastery of the members of the agile community of practice is not rocket science. My top picks are as follows:

  • Sharing is caring: The hoarding of information is one of the worst anti-patterns of an agile practitioner. Hence share everything, from retrospective exercises to information resources (newsletters, blog posts, etc.) to working material and supplies. A wiki might be the right place to start.
  • Training and education: Organize regular workshops among the agile practitioners to train each other. If not everyone is co-located, record the training for later use. (Webinar software has proven to be helpful with that.) If you have a budget available, invite the Marty Cagans to the organization to train the trainers.
  • Organize events: Have regular monthly events for the agile practitioners and others from the organization and host meetups with external speakers. Make sure that all practitioners meet at least once a quarter in person for a day-long mini-conference.
  • The annual conference: Consider hosting an organization-wide yearly ‘State of Agile’ conference to share lessons learned, success stories and failures.
  • Communication: Use a Slack group to foster friction-less communication among the community members.
  • Procurement: Find a workaround to allow non-listed suppliers to provide supplies such as special pens or stickies. (Probably, there is a freelancer or contractor among the practitioners who can help with that.)

Agile Transition — A Manual from the Trenches

The latest, 225 pages strong version of “Agile Transition — A Hands-on Manual from the Trenches w/ Checklists” is available right here, and it is free!

Download the ‘Agile Transition — A Hands-on Guide from the Trenches’ Ebook for Free

External Offerings, Serving the Organization

Generally, what is working for the agile community of practices is also suitable for the members of the organization, probably with a different focus, though. Try, for example, the following:

  • Provide training: Provide hands-on training classes in close collaboration with the change agents. Consider a less demanding format than a typical day-long training class — a focused one-hour class in the late afternoon may prove to be just the right format for your organization. (Tip: Avoid the necessity for participants to apply somewhere to be allowed into the class. That will massively improve attendance rates.) Also, consider offering a kind of curriculum comprising several of those lightweight classes.
  • Communication: Consider running a website or blog beyond the agile community of practice’s wiki to promote the organization’s path to becoming an agile organization. The best means I have encountered so far to foster engagement among change agents and early adopters is a weekly or bi-weekly newsletter within the organization.
  • Make ‘agile’ mandatory for new colleagues: Educate all new hires on agile principles and practices to support the repositioning of the company.
  • Gain visibility: Selling ‘agile’ to the organization to win hearts & minds is best achieved by making it tangible at a low-risk level for the individual. For example, organize regular lean coffee sessions or knowledge cafés, thus providing a safe environment to check this ‘agile’ thing out. Invite people directly to ceremonies, for example, sprint reviews — if you practice Scrum — that might be of interest to them. (Guerilla advertising is welcome.) Lastly, why not offer an informal way of contacting agile coaches and change agents? Some people shy away from asking supposedly stupid questions in the open and may be hard to reach otherwise.
  • Provide transparency: Occupy a space at a highly frequented part of a building or the campus to show what ‘agile’ is about and provide an overview of practices, courses, regular events, etc.
  • Host events: Try to organize regular events for the organization, for example, providing lessons learned from teams that are spearheading the agile transition. Create a schedule in advance and stick to it. Perseverance is critical to fighting the notion that becoming agile is merely a management fad that will go away soon.

Overcoming Resistance to an Agile Community of Practice

So far, I have not yet witnessed open pushback from an organization about the creation of an agile community of practice. More likely, you will encounter complacency or ignorance at the management level. Sometimes, the budgeting process will be utilized — willingly or not — to impede the creation of a community.

But even when you are financially constrained, there is still enough room to move the agile community of practice ahead. Several free services provide video conferencing and hosting, blogs, event organization, or newsletters. In my experience, it is less a question of available funds than of overcoming your anxiety and getting going without waiting for someone’s written approval. Assuming accountability as an agile practitioner by starting a community, and thus moving the transition forward, sounds agile to me.

Creating an Agile Community of Practice — Conclusion

Creating an agile community of practice is a vital part of the process of becoming an agile organization. It provides much of the groundwork necessary to convince the members of the organization that becoming agile is neither hazardous nor a passing fad, but an excellent opportunity for everyone involved.

Do you have an agile community of practice? If so, what practices have been successful in your organization? Please share with us in the comments.

📅 Upcoming Webinars

Download your invitations now — there are no more than 100 seats available:

Note: All webinars are aired from 06:00 to 07:00 PM CEST. (That is 12:00 to 01:00 PM EDT or 09:00 to 10:00 AM PDT.)

📺 Subscribe to Our Brand-New YouTube Channel

Now available on the Age of Product YouTube channel:

✋ Do Not Miss Out: Join the 3,200-plus Strong ‘Hands-on Agile’ Slack Team

I invite you to join the “Hands-on Agile” Slack team and enjoy the benefits of a fast-growing, vibrant community of agile practitioners from around the world.

If you would like to join, all you have to do is provide your credentials via this Google form, and I will sign you up. By the way, it is free.

🎓 Do you want to read more like this?

Well, then:

How to Create an Agile Community of Practice was first published on Age-of-Product.

How to Create an Agile Community of Practice was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.



COMPUTER SCIENCE: EXPECTATION OF COMMUNITY VS REALITY

2018-05-16 07:46:01


Computer science is a great career, but only if you are willing to work really hard at it. I have heard that a lot of people trying for engineering are not interested in computer science, mainly because they assume they do not have sufficient knowledge of computers.

In today’s world of digital information, everyone is busy offering customer services or products for free or at minimal rates that we can all afford. Social websites like Facebook and search platforms like Google provide their services for free. But what we forget is that nobody in this world wants to work for free. So these companies end up collecting information about their users and selling it to third-party companies to make money. Even the NSA keeps track of people all over the world, watching their daily activities through social media feeds, call logs, payment logs, and so on. Its stated aim is to prevent terrorism, but its method of invading other people’s privacy is deeply wrong. From all this we can see that most big corporations are busy distributing our data, making privacy and security little more than an illusion for the world.

If we take up computer science as a field, or learn to code, we can help set right the things that are so wrong in this world. Many people in computer science are not bothered by these issues and simply work for big money. But why should we? Let me give you a few examples.

There was a kid who was a nerd and a programmer from the start. He was hacking on real projects by the age of 14, collaborating with hackers twice his age, and he later became a co-founder of Reddit, the popular forum where people post and discuss ideas. At MIT he saw a problem: every student who wants to write a research paper has to read other people’s research papers first, yet those papers sit behind paywalls, and academic publishers collect enormous fees from students and universities for access. He decided to set those papers free.

From a network closet in an MIT basement, he ran a simple fetch script, written in Python, that downloaded the paywalled papers one at a time onto a hard disk, intending to make them freely available. He had done something similar before: in the United States, court documents, which ought to be public, sit behind a pay-per-page system, so he had run a comparable program to download millions of court records and release them at no charge. For the MIT downloads, federal prosecutors charged him, and, facing decades in prison, he took his own life. His name was Aaron Swartz, and all he wanted was free access to knowledge for everyone.

Another story concerns a man who was also very good at coding. He was hired by the CIA, performed a series of tasks, and moved up the levels quickly. He was later posted to the NSA, where he worked on a tool named Heartbeat, whose job was to organize the data the NSA collected from several regions. But over time, he came to feel that the NSA’s wholesale invasion of privacy was not a real solution. So he leaked top-secret NSA documents to prove that the NSA had been spying on the world for years. He was charged with espionage and currently lives in Russia. He is none other than Edward Snowden, and the documents he released became known as the Snowden leaks. He believed that privacy is very important and that everyone should have it.


Why am I telling you all this? Because these people could have made fortunes if they had focused on earning, but they chose instead to stop what they saw as wrong and correct it on their own. I can tell you that there are dozens of hackers and programmers out there who have done so much for us that we don’t even know about it. Hence, it is our duty as programmers, coders, developers, and individuals to acknowledge their work, learn from what they created, and take it further. Make something better out of it that can help the community that has given us so much. We don’t all have to go out and fight; we can support them from behind the curtains, motivate them, and help them in any way they need.

But most of us think that we cannot code as well as they can. Hell, most of us cannot complete a project on our own. We can still help, though: by sharing information with our friends, helping with documentation, and contributing small bug fixes and patches. It is up to us to make our world a better place and to prove that security isn’t just an illusion. The world needs us, and it’s high time we accepted this fact. In reality, too many of us go for fat paychecks and work for companies that may be destroying our community.


You are smart. The rest of the decision is up to you.


COMPUTER SCIENCE: EXPECTATION OF COMMUNITY VS REALITY was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


7 AWS Services to Improve the Efficiency of Your App Development

2018-05-16 07:32:35

Amazon Web Services (AWS) is presently the leading public cloud services provider in the world, offering more than 100 AWS services spanning over 19 categories.

Image Credit: Amazon Web Services

Let’s look at seven important services offered by AWS that would be perfect for your next mobile app development project.

1. AWS Lambda

Recently, the term “serverless” has been doing the rounds. It is not that servers no longer exist; they are still there, but you don’t have to manage them anymore. With AWS Lambda, you can run code in response to events without provisioning or managing servers. Just upload the code, and Lambda will manage the rest, including scalability.

The user doesn’t have to indulge in server administration. The code can be set up to automatically trigger from other AWS services or it can be called directly from any web or mobile app.

How it Works:

The code (referred to as “Lambda function”) is uploaded to AWS Lambda. Each function includes the code and its associated configuration information, including the function name and resource requirements. These functions can be set to automatically run the code in response to multiple events.

Image Credit: Amazon Web Services

With no affinity to the underlying infrastructure, Lambda functions are “stateless”. There’s complete freedom to launch any number of copies of a function, making scaling possible and easy. AWS Lambda executes the code only when it’s needed and scales automatically with demand.

It runs your code on highly available compute infrastructure and performs the following functions: server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging.
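
To make this concrete, here is a minimal, hypothetical Lambda function in Python. The `(event, context)` handler signature is Lambda’s real convention; the `name` field in the event is purely illustrative:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger's payload (e.g. an API Gateway request);
    # 'context' exposes runtime information such as the remaining time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Uploaded to Lambda, a function like this runs only when triggered, with AWS handling the provisioning and scaling described above.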

Key Features:

Add custom logic to other AWS services: You can add custom logic to AWS resources like Amazon S3 buckets and Amazon DynamoDB tables. This is an amazing feature that lets you apply compute to data as it enters and exits the cloud.

Build custom back-end services for apps: This feature lets you create events for any kind of platform, making it easier for you to ship updates and changes. It also reduces battery drain, since the backend applications are triggered on demand.

No new languages, tools, or frameworks to learn: Lambda supports code written in several programming languages. This saves developers time, as they don’t have to learn any new languages or frameworks, and they have the freedom to use third-party libraries too.

Administration is completely automated: Whenever you need to update, release a new patch, add new servers or resize the current ones, you can rely on Lambda to get it done.

Built-in fault tolerance for reliable operational performance: To protect your code across multiple machines and data centres, Lambda provides built-in fault tolerance.

Automatic scaling based on incoming requests: Lambda automatically invokes your code to scale and support all the requests that come in, even at spike periods, and with no compromise in performance or consistency.

Integrated security model: You can configure AWS Lambda to access resources behind a virtual private cloud. This lets you leverage the advantages of custom security groups and network access control lists too.

Flexible resource allocation: Gives you the freedom to allocate resources proportional to CPU power, network bandwidth, and disk I/O as per your functions.

2. Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL cloud database service that would help you store and retrieve any amount of data or traffic depending on demand. The database service is extremely fast, providing consistent, single-digit millisecond latency at any scale.

Since it allows for automatic partitioning, scalability is a major highlight: an application’s data volume and traffic can grow without any user intervention. The reliability and scalability of the service make it an ideal choice for gaming, web, ad tech, mobile, and IoT apps, among several other applications. It is also well suited to supporting multiple document types and key-value store models.

How it Works:

As DynamoDB works with Java, JavaScript, Node.js, PHP, Python, Ruby, and .NET, you can pick your favorite programming language before getting started with it. Once you create the database table, you can also set the target utilization for auto scaling.

You can configure the service to handle various database management tasks like hardware or software provisioning, setup and configuration, software patching, operating a distributed database cluster and partitioning data over multiple instances for scaling purposes.
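
As a sketch of the key-value model, DynamoDB’s low-level API represents every item in a typed attribute-value format. The helper below is hypothetical and simplified to a few types; in practice the AWS SDKs do this marshalling for you, so this only shows the shape of the data the service stores:

```python
def to_dynamodb_item(obj):
    """Marshal a plain dict into DynamoDB's attribute-value format
    (simplified sketch: strings, numbers, booleans, lists, nested maps)."""
    def encode(value):
        if isinstance(value, bool):      # check bool before int/float
            return {"BOOL": value}
        if isinstance(value, str):
            return {"S": value}
        if isinstance(value, (int, float)):
            return {"N": str(value)}     # DynamoDB transmits numbers as strings
        if isinstance(value, list):
            return {"L": [encode(v) for v in value]}
        if isinstance(value, dict):
            return {"M": {k: encode(v) for k, v in value.items()}}
        raise TypeError(f"unsupported type: {type(value).__name__}")
    return {key: encode(value) for key, value in obj.items()}

item = to_dynamodb_item({"userId": "u-1001", "score": 42, "tags": ["a", "b"]})
```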

Key Features:

DynamoDB Accelerator (DAX) in-memory cache: DynamoDB Accelerator (DAX) is a fully managed in-memory cache for DynamoDB that helps reduce response times from milliseconds to microseconds. This is a time-saver for demanding applications, as it improves performance by up to 10x.

Global tables for a multi-region, multi-master database: Global tables capitalize on a fully managed, multi-region, multi-master database to deliver fast local read and write performance for global applications that require heavy scalability.

Full backups and restore: Full backup of data in Amazon DynamoDB table means no slowing down of performance as the entire backup process is done in seconds, irrespective of the size of the tables.

Secure encryption: The service promises fully managed encryption at rest using AWS KMS or Key Management Service. It removes all operational hassles and complexities when you want to store and save sensitive data.

Automatic, seamless scaling: This allows for automatic throughput capacity management in response to changing demands, with zero downtime. It looks at actual traffic patterns and is prepared to handle sudden surges in traffic.

Support for key-value data structures and documents: It supports querying and updating collections of objects identified by keys, with the values holding the actual content. It also supports the storing, querying, and updating of documents.

Amazon DynamoDB Console and APIs: It allows you to easily create and update tables, monitor tables, add or delete items, and set alarms. You can also define the time in which particular items in the table must be available, after which they will expire.

Develop locally, scale globally: It is easy to develop and test applications locally, on your laptop or on an Amazon EC2 instance. And when you are ready to scale, you can easily deploy to the AWS Cloud with Amazon DynamoDB.

DynamoDB Streams to track changes: DynamoDB Streams makes it easier for you to capture and process changes to DynamoDB data and tables. This would help you track and resolve issues quickly.

Triggers: AWS Lambda can interact with DynamoDB to execute a custom function when there are changes detected in a table.
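
As a sketch of such a trigger, a Lambda function receives DynamoDB Streams records in the documented `Records` / `eventName` / `dynamodb` shape. The counting logic below is purely illustrative:

```python
def handle_stream(event, context):
    # Collect the new images of items that were inserted into the table
    inserted = [
        record["dynamodb"]["NewImage"]
        for record in event["Records"]
        if record["eventName"] == "INSERT"
    ]
    # A real function might index these items, send notifications, etc.
    return {"inserted": len(inserted)}
```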

3. AWS Device Farm

AWS Device Farm is an app testing service. You can test all kinds of apps on real mobile devices hosted by AWS. You can use the service in two different ways:

  • Automated Testing — Test the app in parallel against many physical devices in the AWS Cloud
  • Remote Access — Interact with devices in real time from a web browser

You can test your iOS, Android, and web applications against real mobile devices in the cloud, so it looks like real users are using them, with real gestures, swipes, and interactions. You can view video, screenshots, logs, and performance data for your app, so you instantly know what to fix, and where.

How Automated Testing Works:

To make use of the functionalities of AWS Device Farm, you have to first select the app type — native, hybrid or web app. Developers can then access the devices directly through their local host machines to identify bugs and performance issues.

How Remote Access works:

When you select a particular device based on make, model and OS version, it will be displayed on your browser. The developer can interact with the device to reproduce issues or test new functionality.

Key Features:

Test accuracy: Tests are highly accurate because you can use real, physical devices. So you can test your Android, iOS or Fire OS app on devices hosted by AWS, after uploading your own tests or built-in compatibility tests.

Configure and simulate real-world environments: You can simulate dynamic environments by changing the parameters of your test scripts. You can test apps in real time on real devices to gauge real-world customer scenarios across a varied set of device configurations.

Development workflow integration: Device Farm comes with service plugins and API to help initiate the test automatically, and get the results from CI (Continuous Integration) environments and IDEs.

Faster way to reproduce and fix bugs: You can manually create issues and run automated tests in parallel, after which you can check the videos, logs and performance data to understand where the problems lie.

Secure testing: You can perform a series of cleanup tasks, including uninstalling the app after test execution, after which the device is taken out of use. This ensures the safety and security of your app.

4. Amazon Cognito

Amazon Cognito is a user-state synchronization service that lets you create unique identities for your users. It enables secure app authentication, allowing developers to easily add user sign-up, sign-in, and access control for web and mobile apps.

How it Works:

Once you sign up for an AWS account, all you need to do is add the Amazon Cognito SDK to your application and write a few lines of code. Next, you can initialise the Cognito credentials provider and use the Amazon Cognito management console to create a new Identity Pool.

The Cognito credentials provider will generate or retrieve unique identifiers for your users, issue authentication tokens, run the verification process to identify and verify identities, and generate a valid token when the user submits their details.

Key Features:

Secure directory: Amazon Cognito User Pools provide a secure user directory that scales to hundreds of millions of users. Authentication tokens are issued to validate each user.

Social and enterprise identity providers: App users can sign in through social identity providers (Google, Facebook, Amazon, etc.) and through enterprise identity providers like Microsoft Active Directory through Security Assertion Markup Language 2.0 (SAML).

Built-in customisable UI: The Android, iOS, and JavaScript SDKs for Amazon Cognito help you add user sign-up and sign-in pages to your applications.

Standards based authentication: It uses common identity management standards like OpenID Connect, OAuth 2.0 and SAML 2.0. This gives users a temporary set of limited credentials to access your AWS resources.

Security and compliance: You can meet several security and compliance requirements, even for applications requiring high-level security. It is HIPAA eligible and compliant with PCI DSS, SOC, ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, and ISO 9001 as well.

5. Amazon Pinpoint

If you have been relying on targeted push notifications to increase mobile engagement, Amazon Pinpoint just makes it easier to not only run targeted campaigns, but measure results as well. You can also engage your users through SMS, email and mobile push messages including:

  • Targeted messages like promotional alerts or customer retention campaigns
  • Direct messages like order confirmations or password reset messages

How it Works:

Through AWS Mobile SDK integration, you can gather customer analytics data and create customer segments. Personalized multi-channel messages can then be sent to app users.

Key Features:

Track customer analytics: Integrate Amazon Pinpoint into your apps to track and measure usage data, understand how customers interact with the apps and how they respond to your messages.

Global reach: Enjoy a global reach of over 200 countries, and send messages through any mode of communication users prefer.

Application analytics: Learn all about how your customers use your app through AWS Mobile SDK. This data can be exported to external databases and applications for monitoring purposes.

A/B testing: A/B testing is available to ensure intended users respond to messages.

Track campaign metrics: You can monitor the messages sent to users and analyze them to optimize future campaigns, including how many users acted on a message, how many ignored it, and how many opted out.

6. Amazon S3

Amazon Simple Storage Service (S3) is a cloud object storage service to store and retrieve data from anywhere. It can collect data from corporate applications, mobile apps, websites, IoT sensors and other devices.

S3 has different storage classes designed for different purposes:

  • S3 Standard for general-purpose storage of frequently accessed data
  • S3 Standard-Infrequent Access and S3 One Zone-Infrequent Access for long-lived data that will be less frequently accessed
  • Amazon Glacier for long-term archive

There are different kinds of data policies that would help determine the process through which data is managed. The user can select the concerned storage plan, and migrate all the data there without making any changes to the end application.
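
As an illustration of such a data policy, an S3 lifecycle configuration can transition objects between the storage classes listed above automatically. The prefix and day counts in this sketch are made up for the example:

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30,  "StorageClass": "STANDARD_IA" },
        { "Days": 365, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Applied to a bucket, a rule like this would move objects under `logs/` to Standard-Infrequent Access after 30 days and to Glacier after a year, without any change to the application.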

How it Works:

Data is stored as objects within resources referred to as “buckets”. A bucket can store as many objects as required, with each object being up to 5TB in size. The user can read, write and delete the objects within the bucket through access controls set individually for each object or for a bucket as a whole.

User permissions can be set for accessing the objects within a bucket, and access logs can be viewed at any time. The user can choose any AWS Region for a bucket to minimize latency and costs and to stay within regulations.

Key Features:

Data availability, durability and scalability: You can back up all your data and protect it against malicious or accidental deletion. You can control who can access your data through access-control mechanisms and VPC endpoints, making it reliable.

Query in place functionality: Amazon comes with a suite of tools that would help you run Big Data analytics on the stored data.

Flexible data management: Amazon S3 helps you manage your storage by providing actionable insights regarding usage patterns.

Comprehensive security and compliance capabilities: It has three different forms of encryption to ensure the user gets their desired levels of security. It supports security standards and compliance certifications like PCI-DSS, HIPAA/HITECH, FedRAMP, EU Data Protection Directive and FISMA.

Simple, reliable APIs for easy data transfer: The flexible data transfer feature of S3 makes it easy to transfer large amounts of data through a dedicated network connection.

7. Amazon CloudFront

Amazon CloudFront is a global content delivery network (CDN) service offered by AWS. It securely delivers data, videos, applications, and APIs to your viewers quickly with high transfer speed and low latency. There’s a global network of 116 Points of Presence in 56 cities across 24 countries in North America, South America, Asia, Australia and Europe.

How it Works:

Static and dynamic files stored in origin servers are distributed to the end user through Amazon CloudFront. The origin server is registered with CloudFront using an API call, which returns a domain name that can be used to distribute the content.

Content requests are routed to a suitable edge location, which retrieves its local copy of the file. If a local copy is not available, a copy is fetched from the origin server.

Key Features:

High-end security: Commendable levels of security are ensured through the end-to-end HTTPS connection between the end user and the origin server.

Media streaming: Just like with other aspects of AWS, media streaming is made possible through a variety of options, and is implemented using a huge variety of protocols layered on top of HTTP.

Programmable content delivery with AWS Lambda: This lets you configure multiple origin servers and cache behaviours based on the usage pattern of your application.

Reporting and analytics tools: CloudFront offers a number of options for all your reporting needs, from tracking your most popular objects to learning more about your end users.


These AWS services provide a varied set of infrastructure services delivered as a utility, on demand with the advantage of pay-as-you-go pricing. This is a major boon for enterprises because they can instantly respond to changing business requirements and never falter when the demand goes up during peak seasons. You can make use of these AWS services to revolutionize your mobile app development, scale apps seamlessly and make them work optimally.

Interested in incorporating AWS cloud services in your next mobile app? We can guide you!

Contact Us Today!

Originally published at Cabot Solutions on May 15, 2018.

7 AWS Services to Improve the Efficiency of Your App Development was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Five special announcements of Google I/O’18: Google Keynote

2018-05-16 04:37:18

As we all know, the Google I/O 2018 keynote has taken place. At that event we saw many new Google products that were just wow.

At last year’s Google I/O 2017 we saw products with wonderful contributions from AI, but this year the impact of AI has become even more powerful. So, let’s take a look.

1. Predicting cardiovascular risk: First of all, Google AI has focused on healthcare as its most important field. Last year Google announced its work on diagnosing diabetic retinopathy, helping doctors diagnose the disease through retina scans and deep learning. This year we learned that Google AI has started working on predicting cardiovascular risk: from the same kind of retina scan, Google can predict the risk of cardiovascular events such as stroke or heart attack. Moreover, Google AI has started suggesting medical treatments to doctors.

2. Smart Compose: Google’s Gmail has been redesigned, and a new feature called Smart Compose has been added.

With Smart Compose, when a user starts composing an email, machine learning suggests phrases that he or she can use to compose it more easily.

3. Suggested actions: Suggested actions is a new Google Photos feature in which the AI system suggests the right action to fix the contrast or brightness of a photo. Another interesting feature: if you take a photo of a document, Google will convert the document into a PDF. In Google Photos, AI makes photos more beautiful and natural.

4. Wonderful Google Assistant: There are six new voices available in Google Assistant. The Assistant is now naturally conversational and visually assistive, and it has introduced Continued Conversation and Multiple Actions features. Most small businesses have no good online appointment system, so Google Assistant has introduced a new AI-based appointment-booking system.

5. Android P: Android P is the new version of Android. New features in Android P include Adaptive Battery, which uses ML to predict which apps will be used in the next few hours, and App Actions, which predicts the next task of a user. Slices and ML Kit are new APIs for developers.

So here is a short description of the latest cool products from Google. I hope you enjoyed it.

Five special announcements of Google I/O’18: Google Keynote was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Why Developers Should NOT Use MacBook Pro

2018-05-16 04:13:40

About ten years ago there was an article claiming that “Every Developer should have a MacBook Pro”, which listed a bunch of reasons, for example:

  • Better hardware/OS design. The MacBook Pro has the most usable touch pad, one that can completely replace your mouse. Some OS features focus on convenience as well, such as Spotlight.
  • Unix-like. The convenient, native terminal makes it possible to use almost every piece of software from Unix/Linux.
  • Amazing software ecosystem. Apps for OSX are usually better designed and much more convenient than apps on Windows, especially apps for designers. The App Store also makes software purchases much easier.

Yes, these reasons are all true, even today. I would still be swiping happily on my touch pad, using Spotlight to find my apps, making diagrams with OmniGraffle, and installing dev tools with brew install, if I were still using my MacBook Pro. However, I’ve changed my preference, and now I’m using a ThinkPad with Ubuntu. I will explain why, and I feel these reasons hold for most developers who use Linux.

The Killer Feature: Touch Bar

At the end of 2016, Apple released a new generation of MacBook Pro equipped with the “Touch Bar”. I agree that the Touch Bar was a great design concept that helped many users who were not comfortable with the traditional function keys, but instead of calling it a “killer feature” as the media did, I would say the Touch Bar killed the MacBook Pro itself.

I am a heavy Vim user, and I believe many developers are too. The most important key for Vim users is “Esc”. I need to press it every few seconds, of course without looking at where it is, and most importantly, I need to feel the physical key so that I know the key press registered. The Touch Bar, however, removed the Esc key completely. This makes Vim much harder to use.

The Touch Bar also removed the function keys, which are very useful for debugging. PyCharm, WebStorm, Android Studio, and even Chrome Developer Tools all use F5~F11 as debug hotkeys. And again, I need to “feel” the physical keys to make sure I am hitting the correct one. With the Touch Bar, all of that is gone.

A Better Linux

Linux has evolved a lot in recent years. I have been using Ubuntu for about one year and upgraded to Ubuntu 18.04 on the second day after its release. Ubuntu satisfies most of my daily needs. Look at what I am using every day:

  • IDE: VSCode / PyCharm / WebStorm, or simply Vim
  • Browser: Chrome / Firefox
  • IM: Slack / Skype / Telegram
  • Office: LibreOffice

All of these apps have Linux versions, and they work just as well as on Mac OSX. LibreOffice is an exception, because MS Office does not support Linux and, yes, MS Office and iWork are much better than LibreOffice. But LibreOffice is much more stable today, and since I don’t make documents every day, it is good enough for me.

The only inconvenience I have found in Linux is that I don’t have a good app for making diagrams; OmniGraffle is way better than Inkscape. But luckily I don’t need diagrams frequently either, so I can live with LibreOffice Draw or Google Drawings.

Software Limitation

Some apps do not perform well under Mac OSX. One of the most important reasons I migrated to Ubuntu was the limitations of VirtualBox.

VirtualBox is a free virtual machine application. In our workplace, we use Vagrant, which is essentially a wrapper around VirtualBox. When I was still using a MacBook Pro, I wrote code on the MacBook Pro and ran it on Ubuntu inside VirtualBox. The problem then was how to synchronize the changed files into VirtualBox. I tried several solutions, but none of them worked well:

  • Setting up Samba in VirtualBox and editing the code remotely. Technically this works, but since we need to watch for file changes inside VirtualBox in order to rebuild the project, changes saved through Samba would not trigger the rebuild immediately. Usually I had to wait about 20 seconds after saving my changes before the project rebuilt.
  • Using NFS to export the code on the MacBook Pro and mounting it as an NFS drive in VirtualBox. But NFS access is too slow, and a rebuild took 3x-5x more time than usual.
  • Using a VirtualBox Shared Folder to map the code directory from the MacBook Pro into VirtualBox. Again, this approach did not work because of the slow access speed.
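
For reference, Vagrant also supports an rsync synced-folder type that copies files one-way into the VM and so sidesteps the Samba, NFS, and Shared Folder performance problems. A sketch of a Vagrantfile using it (the box name and excludes are illustrative):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  # One-way rsync from host to guest; run `vagrant rsync-auto`
  # in a separate terminal to watch for changes and sync immediately.
  config.vm.synced_folder ".", "/vagrant",
    type: "rsync",
    rsync__exclude: [".git/", "node_modules/"]
end
```

Because the files land on the guest’s own filesystem, file watchers inside the VM see changes at native speed.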

Well, I cannot really blame OSX, because this was actually a problem with VirtualBox, but the root cause was that our development environment could not run directly on OSX.

Eventually I ended up wiping OSX and installing Ubuntu on my MacBook Pro, and all the problems were gone.


In this post I explained why I think the MacBook Pro is no longer a good choice for developers. It is undoubtedly still a great laptop for UI/UX designers, product managers, and people who do a lot of design or document work, but developers have a better choice.

So what’s the choice? Just use any laptop and install Linux!

Thanks for reading. If you have the same feeling as me please clap and consider sharing this post!

Why Developers Should NOT Use MacBook Pro was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Digital Asset Ownership: Why It Matters & How Blockchain Will Save The Day

2018-05-16 03:41:32

In the pre-digital age when retailers sold all their goods in brick-and-mortar buildings, farmer’s market booths or lemonade stands, the retailer owned and was responsible for all their company’s assets.

Their customers knew them, and they controlled everything about how they presented themselves to the marketplace. Merchants, especially those who produced their own merchandise, knew what their customers wanted and how to present their goods.

That character wasn't simply a matter of pride; it's how local entrepreneurs worked. It's the basis of capitalism.

I don’t have to tell you how the digital age changed how we shop. But it also changed how we sell.

Today, small merchants must play a game run by forces they simply aren’t able to compete with: the online behemoths — Amazon, eBay and others. They must play by the rules of these platforms to have a chance at survival.

And while it’s relatively easy (though not necessarily cheap) for a small business to get started as a seller on these big sites, it’s not a playing field meant to benefit them.

One big reason is that small businesses no longer own their own shop windows in a digital sense. The platforms do.

These digital assets include product images, descriptions, customer reviews, and how the page is structured for viewing. A small business with an Amazon seller account has some input in how their product is presented — what images and descriptions to use, and what price to sell. But the final say is Amazon’s, because they own every single digital asset on their platform.

This gives Amazon massive leverage over their business that free enterprise never intended. It’s a monopoly in joint-ownership’s clothing.

Because retailers don't own their digital assets on Amazon, Amazon's algorithms can manipulate a seller's page as Amazon deems fit. It can heighten or lessen the page's presence according to its own market assessments. It can even put a competitor's product on your page as a recommended option.

That’s a lot of advertising placed at extremely important parts of a product page.

Think about that one. Have you ever gone to a neighborhood hardware store and seen a poster for a hammer for sale at a competitor’s store down the street? Have you ever wandered past a McDonald's and seen ads for the Whopper in the window? We’re dealing with that level of ridiculousness here.

Amazon can describe that as “offering choices to our customers.” But it’s not. It’s controlling the entire market, degrading some retailers’ product value while upgrading the value of the highest bidder (i.e., your competition).

That’s why ownership of digital assets is a hugely important issue in today’s e-commerce ecosystem. Small merchants using Amazon and other big platforms have almost no option but to give that control over to corporate entities whose sole focus is rewarding themselves and their stockholders.

Is the increased traffic worth that loss of control and the lower ROI?

No. And that’s why the new frontiers of blockchain are bringing the rights of small businesses into the spotlight. It’s imperative for small business owners to own as much of their digital commodities — assets, technology, and data — as they possibly can.

With blockchain technology, all ownership of digital assets can be recorded in a trustless environment to be bought, sold, leased, or transferred as the owner sees fit.

One promising project, ECoinmerce, is working on a solution to solve this exact issue. The ECoinmerce e-commerce platform allows all business owners to retain their own digital assets. They decide what goes on their landing pages, and maintain control of how it’s accessed, amended and presented.

With no centralized team, no stockholders to answer to, and no commission on sellers’ profits, ECoinmerce only provides a framework for storefronts to exist. Sellers decide the most important elements that their customers need to see. They control the appearance. Their marketing prerogatives drive their product pages; their own customer data lets them set inventory and priority.

Add that to the security of data on the blockchain, the flexibility of cryptocurrency and a self-regulating community, and you’ll see how ECoinmerce lets businesses control their destiny without intrusion or the capricious conclusions of an artificial algorithm designed by somebody else.

ECoinmerce is building an e-commerce platform to make small business owners’ investments into their own brand truly valuable — without taking away what they’ve worked hard to build.

This kind of innovation is why we have a free market in the first place. This kind of innovation is finally bringing a level playing field back to e-commerce.

Thoughts? What else is blockchain improving? Let me know what you think in the comment section below or start a discussion at your next meetup.

Digital Asset Ownership: Why It Matters & How Blockchain Will Save The Day was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
