Ethereum will pass Bitcoin in 2018: my cryptocurrency investment portfolio

2018-01-12 18:44:16

In the last few days, many have asked about my investment strategy and portfolio mix after I published 95Percent’s Blockchain Technology.

After much deliberation, in this post, I’ve decided to share my holdings with you. Perhaps more importantly, I’ve decided to also share my underlying philosophy. As a reminder, I know nothing. None of this should be construed as investment advice, and you should do your own research before making any investments. I would be financially okay if I lose all of my invested money: you should make sure you could survive a total loss before investing any funds.

But enough of that, how should you approach investing in cryptocurrencies? First, I advocate creating your own investment tenets. Tenets are also a crucial aspect of the product management process. I recommend creating tenets before diving into any business, project or problem.

Why do we write tenets?

  • Tenets are used to make hard decisions
  • Each tenet expresses the conflict arising from two (or more) competing philosophies
  • Each tenet ultimately demonstrates preference for one philosophy over others

Most people have their own philosophies and preferences, but they don’t write them down. Writing them out is crucial because it crystalizes your thinking. Tenets are helpful when times are good and indispensable when things get tough. You should debate your tenets heavily with family, friends, and yourself. Below I share five of my cryptocurrency investment tenets:

Jason’s Cryptocurrency Tenets [January 2018]

1) I will prioritize platform investments (think Ethereum) over application investments (think Dash). Strong infrastructure scales and changes the world. Successful applications are hard to predict and are not stable over time. Platforms better withstand changing customer needs.

2) I will choose cryptocurrencies with real user adoption, and a strong focus on growing it, over cryptocurrencies with the latest tech or the prettiest whitepapers.

3) I will take the super-long-term view. I will prioritize cryptocurrencies that have the potential to be trillion-dollar businesses and will stay away from currencies with more barriers to widespread adoption. If a cryptocurrency is unlikely to ever be used en masse, I won’t buy it. I invest in fundamentals, not merely public opinion.

4) I will greatly value signals in the market, especially signals from entities with inside information and large investment positions — potentially over even my own analysis.

5) I value cryptocurrencies that demonstrate the ability to change direction, pivot quickly and make decisions over cryptocurrencies that emphasize status quo, tradition, and moving wisely but slowly. I recognize this is partly a function of team structure and leadership.

After painstakingly working through my tenets, I’ve researched many of the cryptocoins available today. Based on my personal investment philosophy and this research, I’ve made several investments over the last month. Here are my positions as of January 11th, 2018:

Jason’s Cryptocurrency Portfolio* as of January 11th 2018:

  • Ethereum: 50%
  • Stellar: 20%
  • Neo: 20%
  • Request Network: 10%

*I’ve rounded these numbers to make them prettier.

Ethereum: 50%

As we wrote in Blockchain Technology: “Blockchain technology creates information networks. The fundamental rule of networks is that when a new person joins any network, the network becomes exponentially more valuable. As a corollary, each time another person joins a widely-used network, it becomes exponentially harder for competing networks to offer similar value to people. You use Facebook because all of your friends are on the platform. You are less likely to use a new social network because few of your friends would be on it. As a result, networks tend to produce winner-takes-all markets. Facebook, WeChat and a few other businesses, for example, dominate the social networking space. We expect a similar winner-take-all outcome for blockchain technology. So far, founders have created many hundreds of digital coins. They will create thousands more over the next few years. We expect a handful of these digital coins to successfully walk out onto the global stage, while the vast majority of these coins will ultimately become valueless.”

Amazon is a platform. Facebook is a platform. Platforms dominate the internet. As we’ve seen from Ethereum’s creation of the Initial Coin Offering (ICO) platform, platform coins will dominate the blockchain space as well. We’ll see many cryptocoins repositioning themselves as platform coins, especially starting in the second half of 2018 and into 2019 when many smaller, more niche coins start to flame out. Based on my research, Ethereum is currently best positioned to win the platform war. Pure and simple. I may change my view in the next few months or quarters, but for now Ethereum gets the majority of my money.

Stellar: 20%

Stellar is a platform that wants to make it really easy for companies to ICO (versus using Ethereum). Stellar is ultra-focused on this use case, but that’s okay, because this use case is massive.

Again, per Blockchain Technology, “[blockchain] technology can also make physical-world assets more liquid (easier to sell and buy) by making them more reducible. In other words, the blockchain better facilitates ownership of assets across multiple people… while mega-companies (e.g., Amazon, AirBnB) have successfully built their own digital marketplaces in the past, blockchain provides the available-to-all, trust-building, low-cost financial infrastructure via smart contracts, secure transactions, and an authoritative ledger to [almost anyone]…Unlike crowd-funding sites like Kickstarter, where early backers receive nothing but a product or service, ICOs let entities actually own part of meaningful ideas.”

The tokenization of assets via blockchain is going to change the world. So far, this use case is the only one Ethereum has proved it can solve, and I find it possible that Stellar eats some of Ethereum’s pie: I am watching Stellar carefully. Stellar focuses on usability (think: MVP) instead of extensibility (think: useless features). The founder started Mt. Gox and built the initial framework for Ripple. Stellar is backed by Stripe and has support from top advisors in tech.

Neo: 20%

Over the last decade, China has made it clear that they want to build their own solutions to world problems. I expect this trend to continue into the blockchain world, and expect at least a duopoly platform paradigm (at least one major smart contract platform for the West, and at least one smart contract platform for the East).

Request Network: 10%

Request Network is a platform specifically focused on the payments space (built on top of Ethereum). While the size of the tokenization-of-assets space (e.g., ICOs) is almost incalculable, the payments space remains enormous. Request Network is a big team bet. As a product leader, I value team organization a lot. I’ve studied the core developers of many of the top blockchain coins, and find that most projects are run relatively poorly compared to more traditional software development projects today (partly a function of the decentralization of blockchain teams). Many teams don’t have updated visions or project plans, and as a result they miss deadlines and seem to prioritize things no one wants. Request Network strikes me as agile, able to pivot quickly, and ruthlessly focused on user growth and customer experience (I love the bi-weekly updates). I also immensely value their time in Y Combinator, the top startup incubator in the world.

Like the pre-blockchain startup world, real-life customer feedback is everything. I want a team desperate to get their coin to market. From there, they can interact with real customers and then make technical changes that are likely to lead to meaningful improvements for real customers.

Mainstream cryptocoins I am NOT invested in:

Bitcoin

In my opinion, a huge milestone for blockchain technology will be moving away from the Bitcoin hegemony. Right now, the cryptocurrency market as a whole is psychologically entwined with Bitcoin. When Bitcoin plummets, the market plummets, although we’ve seen signs of change in the last few weeks. In 2018, I predict that Ethereum (or another platform) will surpass Bitcoin. The cryptocurrency market will finally detangle itself from Bitcoin.

I don’t find pro-Bitcoin arguments particularly strong. Initially, Bitcoin saw a lot of success helping entities perform discreet transactions (think: Silk Road). Currently, though, Bitcoin isn’t particularly helpful in the payments space (slow, expensive, and unfocused): the digital currency is unlikely to scale to widespread user adoption for payments. Bitcoin also can’t help with ICOs: it is not a platform. Perhaps most concerning, from a development perspective Bitcoin moves slowly, has divided leadership, and doesn’t practice user-driven development (at least compared to other digital coins). Proponents cite these characteristics as advantages and argue that Bitcoin is a store of value.

We flesh out the digital coin role as a store of value in Blockchain Technology: “Blockchain technology also has potential to provide a new independent store of value. Today, the classic independent store of value, gold, is partly valuable because humans have decided to value it independently of nation states (e.g., Canada) or nation alliances (e.g., the European Union) unlike other mainstream currencies (e.g., the United States dollar is closely tied to the success of the United States of America). Gold is generally inversely correlated with the US dollar: in other words, gold acts as a hedge against the current global financial system. Because gold is difficult to store — heavy, relatively insecure — digital blockchain-currencies represent an attractive alternative. If digital currencies become more stable over time (currently, they are extremely volatile), they may one day augment or supplement assets such as gold.”

The problem is that no one uses Bitcoin as a stable store of value today. Additionally, Bitcoin is relatively uncorrelated with the US dollar, so it doesn’t act as a particularly useful hedge. Ultimately, I think digital coins will be strong stores of value, but this is far, far down the road. At that point, I find other cryptocoins just as likely or more likely than Bitcoin to act as global stores of value.

I further argued that the bigger and more immediate store of value opportunity is “helping entities buy into the global financial system in the first place. In developing countries, for example, many entities are eager to shift local, unstable currencies to stable currencies such as the US dollar to better protect their wealth. Like the US dollar today, the blockchain-backed currencies that facilitate world transactions tomorrow will also naturally act as a store of value. Entities will invest in these currencies as they do the US dollar today. As a result, the same blockchain-based currencies that gain mainstream adoption for payments are also likely to gain mainstream adoption as stores of value.” We will be forever indebted to Bitcoin, but 2018 will mark Ethereum passing Bitcoin, the market falling, and then ultimately rebounding stronger than before. The age of the Blockchain Platform is beginning.

Privacy-centric cryptocoins

While private, fully-anonymous transactions are a large blockchain use case, coins emphasizing privacy will struggle to gain mass adoption in the long term. I expect privacy-centric coins to bear the brunt of initial government scrutiny and regulation. I choose to make my investments on the more public side of the blockchain movement. That said, Monero would be my current pick in the privacy-centric digital coin space.


Ripple

As we wrote in Blockchain Technology: “in the short-term, partial blockchain solutions [like Ripple] will become common. Already, financial institutions are creating their own private blockchain networks and producing digital coin. Participating institutions act as nodes in the blockchain, and have visibility into all transaction on the shared digital ledger.”

I like Ripple, and particularly the focus on getting customers. Like many others, however, I am concerned about the difference between the highly-valuable Ripple Payment Protocol and XRP as an investment vehicle. I also see in-house blockchain development from large institutions as meaningful competition.


Hopefully my perspective is helpful. If I helped you crystallize your own thinking, I’d very much appreciate a small donation to my Ethereum wallet.

  • Ethereum wallet: 0x81ff5029a05ce15c3b6d6e27c7d89a7c30ecaf32

Or just clap a lot =)

Again, this is not investment advice. I re-adjust my portfolio constantly based on new information and could have a completely different set of investments tomorrow. One must be careful not to be affected by sunk cost or fear of missing out biases — and strive to act as objectively as possible. Good luck.

Ethereum will pass Bitcoin in 2018: my cryptocurrency investment portfolio was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


Front end journey into Drupal + Pattern Lab

2018-01-12 18:40:12

Pattern Lab is a hot topic in the Drupal community nowadays: it offers a lot of perks and promises an easier life for designers, clients and, finally, developers. Being modern and innovative, our team surely couldn’t pass it by, so we watched and read several articles about it and, full of excitement, brought it to an upcoming project. One thing I noticed, though, was that most of the presenters had significant Drupal knowledge, which wasn’t my case, but I didn’t pay attention to that at the time.

There are three reasons we liked Pattern Lab. I will go through them, sharing what we found out and what you should pay attention to while adopting the technology:

  1. Component based approach
  2. Decoupling frontend and backend so frontend can work without knowing the guts of Drupal
  3. Nice pattern library playground which can be shown to the customer/designer for early review

1. Component based approach

With the rise of frontend frameworks like React, Angular 2+, Polymer and many others, it is hard to imagine a non-component-based frontend architecture nowadays. In short, every component has its own scope for data, view and controller logic, normally represented with JS, CSS, and JSON or YAML, so changes inside one component won’t affect the styles and internal logic of others.

So the normal component looks like this:
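A typical component folder, sketched with hypothetical names:

```
card/
├── card.twig   # markup (the view)
├── card.yml    # dummy data for Pattern Lab
├── card.scss   # styles, scoped by class-naming convention
└── card.js     # behavior
```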

Pattern Lab recommends grouping assets into folders based on the Atomic Design methodology and provides great support for a component’s isolated dummy data, while giving us the freedom to choose our own tools to handle JS and CSS scope. That is a clever move, in the sense that they cannot predict the stack we would prefer for that.

HTML: there are not many complications about HTML scope, so we just create our own Twig template for each component and then use it wherever we want through the include and embed Twig directives, passing appropriate parameter values.
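For instance (namespace, component and variable names hypothetical), reusing a component from another template looks like:

```twig
{# page template: pull in the card component with its own data #}
{% include "@molecules/card/card.twig" with {
  title: 'Hello',
  url: '/hello'
} %}
```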

Component data: we can choose JSON or YAML format for a component’s dummy data. Both are great, and there are easy ways to quickly migrate from one to the other, though we picked YAML as it looks cleaner:

JSON vs YAML example
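As an illustration with hypothetical fields, the same dummy data in both formats:

```json
{
  "title": "Hello",
  "links": [
    { "label": "Home", "url": "/" }
  ]
}
```

```yaml
title: Hello
links:
  - label: Home
    url: /
```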

These files are used solely to render components in the Pattern Lab UI and never appear in production. We can set multiple instances of dummy data for each component by simply altering the file name with a postfix separated by the tilde symbol, which makes it super useful for testing and demonstrating different states of the component.
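For a hypothetical card component, the pseudo-pattern data files might look like:

```
card.twig          # the template
card.yml           # default data
card~featured.yml  # "featured" state, rendered as its own pattern
card~no-image.yml  # another state
```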

Data file structure
Example of different component states

There is a global data file which is accessible throughout the tree of components, so it is a good idea to put, for example, global menu items or footer links there. The global data file normally lives at: pattern-lab/source/_data/data.yml.
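A global data file (keys hypothetical) can be as simple as:

```yaml
# pattern-lab/source/_data/data.yml, available to every pattern
main_menu:
  - label: Home
    url: /
  - label: Blog
    url: /blog
footer_links:
  - label: Privacy
    url: /privacy
```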

Caution: the only data used to render a pattern is its own data and the global data. If we include or embed component_1 inside component_2, we have to put the data for component_1 into the data file of component_2. If you want to avoid that manual work, check out this plugin: Data Inheritance Plugin.

CSS: we used an SCSS + SMACSS + BEM combination, which gave us a way to isolate CSS through a class-naming convention. I won’t go into details here, but you can take a look at some SCSS tricks to achieve this. I’m also looking into CSS Modules for future projects.
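As a sketch of that convention (class names hypothetical): the component root is the BEM block, and elements and modifiers hang off it, so nothing leaks outside the component:

```scss
// card.scss: BEM keeps every selector scoped to this component's block name
.card {
  display: block;

  &__title {     // compiles to .card__title
    font-weight: bold;
  }

  &--featured {  // compiles to .card--featured
    border: 2px solid gold;
  }
}
```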

JS: this is the most complicated part, and it breaks down into several tasks.

Module bundler: Drupal 8 provides a very cool way to attach a component’s CSS and JS to the page through its library concept, which lets you specify different bundles in Drupal config and then include on the page only those that are really required. A library definition can look like this:
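A minimal sketch in Drupal 8’s libraries.yml format (theme, library and file names hypothetical):

```yaml
# mytheme.libraries.yml
accordion:
  css:
    component:
      css/accordion.css: {}
  js:
    js/accordion.js: {}
  dependencies:
    - core/jquery
    - core/drupal
```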

The library then has to be attached to the template by any of the methods described in the documentation here.

A problem can arise if accordion.js from our example has its own dependency inside. It’s not a good idea to just put this dependency at the same level in the config file, because in that case we have to copy it everywhere. And if there are several levels of dependencies, things become very confusing.

To solve this we used good old Browserify; Webpack is also an option. To make it work, we just require modules inside our JS files and let Browserify handle them for us through a gulp task. That’s how our overlay component’s JavaScript file begins:

Another issue is that Pattern Lab knows nothing about the Drupal libraries concept, so if we want our components to work inside its sandbox, we have to add all the same libraries to the pattern-lab/source/_meta/_01-foot.twig file to make sure our JS is accessible on all pages inside the testing environment.

To simplify things, in the beginning we just created one big JS bundle and attached it both to Drupal and to Pattern Lab. Luckily we were able to keep the bundle relatively small, and there was not much room to split the JS between pages, so our temporary solution became permanent :)

Component interaction: once we have all components isolated, we have to define how they communicate with each other.

Imagine we have a header component and a sidebar menu that slides in any time a button in the header is pressed.

We could put the logic right inside the header JS (inside a click event handler), but since we were striving for a loosely coupled architecture, we decided to use a simple mediator pattern and chose Redux. It’s pretty popular, and I covered some of its aspects before.
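Redux specifics aside, the mediator idea can be sketched in a few lines of plain JS (event and variable names hypothetical): components only talk to a shared bus, never to each other.

```javascript
// A minimal mediator sketch (conceptually what the Redux store does for us).
const mediator = {
  handlers: {},
  on(event, fn) { (this.handlers[event] = this.handlers[event] || []).push(fn); },
  emit(event, payload) { (this.handlers[event] || []).forEach((fn) => fn(payload)); },
};

// The sidebar registers what to do; the header only announces what happened.
let sidebarOpen = false;
mediator.on('header:menu-button-pressed', () => { sidebarOpen = !sidebarOpen; });

mediator.emit('header:menu-button-pressed'); // sidebar slides in
```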

One issue not to forget is that Redux by default calls all store subscribers on every state change, which we don’t want to happen, because we don’t have a virtual DOM and clever change-detection mechanism here like React has. So we used an extension called redux-watch and wrapped our subscribe method so that only the required subscriber is invoked:
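The idea, sketched without Redux or redux-watch (a tiny stand-in store makes it self-contained; names hypothetical), is to compare the subscriber’s slice of state before and after, and only invoke the callback when that slice changed:

```javascript
// Minimal store so the sketch runs standalone (a stand-in for Redux).
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); listeners.forEach((l) => l()); },
    subscribe: (l) => listeners.push(l),
  };
}

// The redux-watch idea: watch one slice of state and fire the callback
// only when that slice actually changes.
function watchSlice(store, selector, callback) {
  let current = selector(store.getState());
  store.subscribe(() => {
    const next = selector(store.getState());
    if (next !== current) {
      const previous = current;
      current = next;
      callback(next, previous);
    }
  });
}

const store = createStore(
  (state, action) => (action.type === 'SET_MENU' ? { ...state, menu: action.menu } : state),
  { menu: 'closed', other: 0 }
);

const calls = [];
watchSlice(store, (s) => s.menu, (next, prev) => calls.push(prev + '->' + next));

store.dispatch({ type: 'NOOP' });                   // menu slice unchanged: no call
store.dispatch({ type: 'SET_MENU', menu: 'open' }); // slice changed: one call
```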

If the terminology sounds confusing, the Redux documentation may be helpful.

Drupal behaviors: if you have never worked on Drupal projects, you should familiarize yourself with the Drupal behavior concept. In short, in the Drupal world we cannot rely on document load events, because Drupal can replace any part of the HTML with a fresh version through AJAX at any time, and the only way we find out about that is through an attachBehaviors method call.

So the rule of thumb here:

Always wrap your JS code in a Drupal.behaviors.yourName object
Behavior example
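The shape of a behavior, with a tiny stub standing in for Drupal core so the sketch runs standalone (behavior name hypothetical; real code would only define the Drupal.behaviors entry and let core call attach):

```javascript
// Stub of the relevant bit of Drupal core, only so this sketch is runnable.
const Drupal = { behaviors: {} };
Drupal.attachBehaviors = function (context, settings) {
  Object.keys(Drupal.behaviors).forEach(function (name) {
    Drupal.behaviors[name].attach(context, settings);
  });
};

// The component code: everything that used to run on document-ready goes
// into attach(), which Drupal calls again after every AJAX content swap.
const initialized = [];
Drupal.behaviors.accordion = {
  attach: function (context, settings) {
    // Real code: find uninitialized accordions inside `context` and wire them up.
    initialized.push('accordion');
  },
};

// Drupal core calls this on page load and after each AJAX response:
Drupal.attachBehaviors({}, {});
```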

Pattern Lab, though, knows nothing about behaviors, so we have to manually attach drupal.js from Drupal core to all of our Pattern Lab pages inside the pattern-lab/source/_meta/_01-foot.twig file:

Code for attached drupal.js

Together with ready.js. This is already included in the Emulsify Drupal theme, which I will cover later, so you probably just need to uncomment the appropriate lines of code.

Assuming everything above is done and working, it seems we are good to go.

But wait!

2. Decoupling

One of the important aspects of the Drupal + Pattern Lab combination is decoupling frontend work from backend work, meaning the two teams can work almost independently, with a clear separation defined by the list of components and their parameters. And here the tricky things begin.

Decoupling is achieved with the Twig namespaces module, which provides a way to put all frontend templates in one place and then reference them from the Drupal templates folder.

Still sounds good overall.

Forms: let’s start with forms. If you think you can make an input component and then, in a Pattern Lab template, include it inside a form like this:

you are wrong.

The way Drupal handles it: first it renders each input component separately as a field, and then provides this rendered HTML as a string parameter to the form. The first time we found that out, we created an if clause: for Pattern Lab we used include, and for Drupal we used the rendered field. We created a global parameter named patternLab in the pattern-lab/source/_data/data.yml file to distinguish whether we were inside Pattern Lab or not.

Example of incorrect code

Don’t do this.

First, you cannot be sure there is no patternLab variable in Drupal, and second, there is a cleaner way to do the same thing that allows for a single version of the Twig file, by moving the Pattern Lab data to the place where it is supposed to be: the YAML file.

Example of correct code — twig
Example of correct code — yml
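As I understand the Data Transform Plugin (pattern and variable names hypothetical; check the plugin’s README for the exact syntax), the Twig file always prints the already-rendered variable, and only the Pattern Lab YAML fills it by rendering the other pattern:

```twig
{# form.twig: identical for Drupal and Pattern Lab #}
<form{{ attributes }}>
  {{ name_field }}
</form>
```

```yaml
# form.yml (Pattern Lab only): Data Transform renders the input pattern into name_field
name_field:
  include():
    pattern: atoms-input
```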

This is achievable thanks to the Data Transform Plugin by aleksip (a very cool guy; follow him if you are into Drupal frontend).

Attribute object: My first implementation of input looked like this:

Input — example of incorrect code

Guess what? This is wrong. Drupal uses the Attribute object for forms and form controls to set and manipulate their HTML attributes.

Learn about the Attribute object and best practices for using it here

So the correct way is something like this:

Input — example of correct code

Luckily, to make it work in Pattern Lab (thanks again to the Data Transform Plugin), we can emulate the attributes object inside a data file:
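As I recall the plugin’s syntax (field names hypothetical; verify against the plugin docs), the emulation looks roughly like this:

```twig
{# input.twig: one template for both Drupal and Pattern Lab #}
<input{{ attributes }} />
```

```yaml
# input.yml: Data Transform turns this into a Drupal-style Attribute object
attributes:
  Attribute():
    type: text
    name: email
    class:
      - form-input
```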


Don’t use the create_attribute() method inside Pattern Lab; it is not supported. Use it only in the Drupal templates folder.

Modules: Drupal lets you build a web application quickly thanks to its huge number of contributed modules.

I am not sure why (maybe some modules are not fully migrated to Drupal 8, or it is simply difficult to separate out a view for them), but there are cases where you have to deal with whatever HTML you get from the server if you want to keep your development pace fast. One example is search results. If you prepared a template for that, you can throw it away. The only way you can style it is by overriding the CSS for the HTML already provided by the module.

Define with your team all modules to be used in the project in advance

Forms, validation messages, Shariff, and the sitemap are good candidates. There was a case where I had to ask the Drupal team to add a class to an element just so I could add a padding in the end. Like it or not, we have to deal with it.

Sometimes a module should be picked based on its underlying JS technology. The very popular Simple hierarchical select, for example, uses Backbone.js inside, so be ready to learn it along the way to debug your application... or ask the team to consider an alternative.

Responsive images: pay separate attention to the Drupal Responsive Image module, because there is a high chance it will be used in the application.

Be ready to receive the whole rendered <picture> element from the server instead of an image path in your templates.

Translations: not very Drupal-specific, but remember about translations in your Twig files. Discuss with the team which of these you’re going to use:

string | t or {% trans %} string {% endtrans %}

And don’t forget about translation context if it is required, though it can be fixed later with your IDE’s replace-all functionality.

Debug Drupal: it may seem forbidden to write about Drupal debugging in a chapter dedicated to decoupling the frontend from the backend, but

learn about kint and how to locate Drupal templates

We frontend developers may not use these methods ourselves, but at least we should know they exist and ask the Drupal team for help with debugging when needed.

Themes: Drupal has a big open-source community, meaning many things are already in place, done by some clever folks. To avoid reinventing the wheel, it is a good idea to start with a Drupal theme that already includes Pattern Lab. The most popular are:

  • Emulsify
  • Particle

We started with Emulsify, and I really appreciate the way they organized the gulp tasks and documentation. But be careful: it uses the Pattern Lab Standard Edition for Twig by default, which doesn’t include important Drupal plugins, namely:

the Drupal Twig Components plugin, which allows using Drupal filters and functions in Pattern Lab (say, the | t filter),

and already mentioned above

the Data Transform plugin, which makes life much, much easier (Attribute objects, include inside YAML files, etc.)

The Particle theme, on the other hand, uses the Pattern Lab Standard Edition for Drupal, which includes both of the mentioned plugins by default, plus some useful commands like npm run new.

Choose the correct Pattern Lab edition.

If you still prefer the way everything is organized in the Emulsify theme, just update the relevant file in it to pick the right edition.

3. Pattern Lab UI

Pattern Lab UI is good and easy to use. It allows you to view a specific component or group of components, or to search for one if you are not sure how to access it from the menu.

One thing you probably should remember is that

all your controls are tested inside an iframe

If an issue or special case arises (a print-page feature, for example), there is a way to test the component in a separate window outside the iframe:

Menu to open component in a separate window

Pattern Lab UI has a lot of settings, but I didn’t find any good documentation for them. Please share in the comments if you know of any.

Who to follow

There are a couple of blogs/repos I would suggest following to get some insights:


Pattern Lab is indeed a very nice way to organize and present a frontend template library, and I don’t know of any good alternatives for achieving that at the moment, given this is not a headless Drupal project.

Projects like Particle and Emulsify make a strong move towards decoupling frontend and backend work, though it is naive to think that prior Drupal experience is no longer a requirement for writing high-quality frontend code on such projects. Knowing how Drupal theming, forms, and other popular modules work is still indispensable, in my opinion.

Have fun with coding and please let me know about your experience in the comments.

Front end journey into Drupal + Pattern Lab was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.


BEST Tech Stories of 2018 (so far)

2018-01-12 18:12:44

This message is brought to you by Codacy. They automate quality standards & code reviews on every commit & pull request so you can ship 2 days earlier on every 2 week sprint. For Hacker Noon readers, they’re offering 15% off using this code: HACKERNOON.

Hello!! The internet has so many best of 2017 articles right now…

At Hacker Noon, we’re forward thinking. Here are the best stories of the year 2018 (so far).

These top stories of January cover security, driverless hotel rooms, cryptocurrency, software development, data science, and more.

🔒 🔓 🔐


I’m harvesting credit card numbers and passwords from your site. Here’s how. by David Gilbertson. “In some wise words from Google: ‘If an attacker successfully injects any code at all, it’s pretty much game over.’ XSS is too small scale, and really well protected against. Chrome Extensions are too locked down. Lucky for me, we live in an age where people install npm packages like they’re popping pain killers.” It’s real, it’s scary, it’s funny, and it’s the internet’s most clapped story of the year so far. There’s some great discussion about it on twitter, hacker news, reddit, and here.

Meltdown and Spectre: what are they and what should I do? by Jonathon Grigg. “The good news is that updates to help mitigate the affects of these vulnerabilities are rolling out now and will continue to do so over the coming weeks. The bad news is that to a certain extent, these attacks exploit the fundamental architecture of modern processors and so are likely to require entirely new hardware to completely fix it.”


Mandatory Read About Humanity’s Inevitable Sludge Toward Driverless Hotel Rooms

Driverless Hotel Rooms: The End of Uber, Airbnb and Human Landlords by Nathan Waters. “In today’s reality we think of hotels as expensive accommodation intended for a few overnight stays. Hotels and Airbnb accommodations are able to charge expensive fees due to their fixed and high-demand locations within the city. By decoupling accommodation and the physical location, we decentralize housing and empower the individual to instantly switch to alternative locations.”

📈 💸

Crypto Life

2018 Will Be The Year Blockchain Technology Goes Mainstream. Here’s Why by Nicolas Cole. “A decade from now we will call this ‘the blockchain boom…’”

Good Alternatives for Bittrex & Binance by Vamshi Vangapally. “When big exchanges with lots of volume shuts the door on new users, what’s the best alternative available for them?”

How to Crush the Crypto Market, Quit Your Job, Move to Paradise and Do Whatever You Want the Rest of Your Life by Daniel Jeffries. “I don’t ask you to believe anything because belief is the death of intelligence. All you need to do is look, listen with an open mind, learn and then decide for yourself. Every single one of these people, four guys and one gal, stressed the need to transcend your limiting belief systems and believe it can be done before you do anything else.”

The Art Of Hodling Crypto: Can’t Make this Sh*t Up by Bruce Hunt. “If you took 5 of the cryptocurrencies that are still in the top 10 from Jan. 1, 2017 and compare them to Jan. 2, 2018 here is what you would have if you invested $1,000 in each on Jan 1st 2017…

  • BTC — $963.00 to $16,460 — total $17,092
  • ETH — $8.26 to $878 — total $106,295
  • XRP — $0.0065 to $2.41 — total $370,769
  • LTC — $4.37 to $255 — total $58,352
  • DASH — $11.26 to $1,160 — total $103,019

…for a grand total of $655,527 for $5,000 invested.”
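The arithmetic in that quote checks out: each total is a $1,000 position scaled by the ratio of the end price to the start price, rounded down to the dollar. A quick sketch:

```javascript
// Verify the quoted totals: $1,000 grows by end price / start price.
const coins = [
  { name: 'BTC',  start: 963.0,  end: 16460 },
  { name: 'ETH',  start: 8.26,   end: 878 },
  { name: 'XRP',  start: 0.0065, end: 2.41 },
  { name: 'LTC',  start: 4.37,   end: 255 },
  { name: 'DASH', start: 11.26,  end: 1160 },
];

// Rounding down reproduces the quoted per-coin numbers.
const totals = coins.map((c) => Math.floor(1000 * (c.end / c.start)));
const grandTotal = totals.reduce((sum, t) => sum + t, 0);
// grandTotal === 655527, matching the article's figure
```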

⚒️ 🌐

Software Development

How I Coded Everyday for 365 Days by Emily Yu. “It’s easy to make excuses. I’ll do it later. I could do that if I tried. But the truth is, you’re not trying, and that “later” will never come. I knew that although I wasn’t the best at time management, if I was going to commit to a long-term resolution like this, I needed to scrap any excuses.”

I thought I understood Open Source. I was wrong by Lorenzo Sciandra. “And something clicked. I think I get it the right way, now: open source doesn’t mean ‘up for grabs’, but instead ‘Hey, look, I did this — if you want to use it too, here’s how. I did it in a way that would fit my needs, but use it as you like.’ And that’s it.”

Introducing Immer: Immutability the easy way by Michel Weststrate. “Immutable, structurally shared data structures are a great paradigm for storing state. Especially when combined with an event-sourcing architecture. However, there is a cost to pay. In a language like JavaScript where immutability is not built into the language, producing a new state from the previous one is a boring, boiler-platy task”

The constructor is dead, long live the constructor! by Donavon West. “We’ve seen that for setting our initial state, we no longer need a constructor (or any other instance property for that matter). We also don’t need it for binding methods to this. Same for setting initial state from props. And we would most definitely never fetch data in the constructor. Why then would we ever need the constructor in a React component? Well… you don’t.”

Top 66 Developer Resources of the Year by Mitch Pronschinske. “After 8+ years reading and curating developer content, I thought it was about time that I compile an end-of-year list with the scores of resource links that I share on Twitter and Reddit throughout the year.”

Web Scraping Tutorial with Python: Tips and Tricks by Jekaterina Kokatjuhha. “I tried to find out when the best time to buy tickets is, but there was nothing on the Web that helped. I built a small program to automatically collect the data from the web — a so-called scraper. It extracted information for my specific flight destination on predetermined dates and notified me when the price got lower.”

📊 🔬

Data Science

4 Must Have Skills Every Data Scientist Should Learn by SeattleDataGuy

  1. Being Able To Simplify The Complex
  2. Knowing How To Mesh Data Without Primary Keys
  3. Being Able To Prioritize Projects
  4. Being Able To Develop Robust And Optimal Systems

Aspiring Data Scientists! Start to learn Statistics with these 6 books! by Tomi Mester. “The first three are lighter reads. These books are really good for setting your mind to think more numerical, mathematical and statistical. They also present why statistics is exciting (it is!) really well. The second three books are more scientific — with formulas and Python or R codes. Don’t get intimidated though! Mathematics is like LEGO: if you build the small pieces up right, you won’t have trouble with the more complex parts either!”


And a Couple of Stories for the Adventurous

Seattle 3 Year Time-lapse Video from the Space Needle by Ricardo Martin Brualla. “Ever since Seattle’s Space Needle installed an HD 360 webcam on the top of the needle, I have been fascinated by the footage captured. Over the past few months, I put together a time-lapse video of what the 360 webcam captured over the last 3 years. Check it out below, and continue reading for more details about it and to learn how it was made.”

How to be smart in North Korea by Christian Budde Christensen. “Last year, when the world seemed on the brink of a nuclear war, my brother and I went to North Korea. As so many others, we had been exposed to the country almost daily through the news, or documentaries. The stories about concentration camps, mass surveillance, and a crazy leader known to execute his opponents with heavy military equipment were far from anything we had ever experienced as 90’s kids growing up in Scandinavia.”


We’re only 12 days into 2018, so there is a chance that these stories will not remain the top tech stories for the entirety of 2018… but nevertheless, these are — IMHO — some great reads. If you’re looking for some more definitive rankings based on a larger sample size, read our top stories of 2017. If you have a story to publish, let’s talk.

Until next time, don’t take the realities of the world for granted.

Kind Regards

David Smooke@AMI

P.S. This message is brought to you by Codacy. They automate quality standards & code reviews on every commit & pull request so you can ship 2 days earlier on every 2 week sprint. For Hacker Noon readers, they’re offering 15% off using this code: HACKERNOON.

BEST Tech Stories of 2018 (so far) was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Finding a Hypotenuse with JavaScript

2018-01-12 17:16:40

Recently, I had the idea for a site which allowed you to brainstorm ideas. Here’s what I had basically envisioned:

A user would start off with a central idea or thought and be able to branch off related ideas or thoughts. This would be great for planning lessons, presentations or even studying. As I was brainstorming this idea, I came up with 4 components that the project would need:

1 An Input/Textarea

2 A button to create a new branch of an idea.

3 A line to visually connect the idea and the branched idea.

In doing this, I quickly ran into a problem. Creating an input/textarea is easy. Creating a button that creates a new input/textarea is also pretty easy. The difficult piece is designing it in a way that is both functional and makes sense visually. For example, for simplicity’s sake we could simply have each button place the new form element vertically below the previous element. Although this would be simpler from a programming standpoint, it wouldn’t make much sense for the user, as it would be hard to tell which branched text box was connected to which previous idea or text box.

As usually serves me well, I decided to start small, and see if I could get the mechanics working on a small scale first. I started with dots, each dot representing a form element / text box. Each dot is 25 pixels in width and 25 pixels in height, black in color. My first goal was to add a new dot when the first dot is clicked, and then distribute subsequent dots around the first dot each time it is clicked. To solve this, I created a variable called “clicks” and set it to 0. Then, on each click event, I add one.

let clicks = 0;
$('button').click(function() {
  clicks = clicks + 1;
});

Then I create an element inside an if statement.

if (clicks == 1) {
  let blackDot = document.createElement('div');
  blackDot.id = "outerDiv1";
  blackDot.className = "blackDotClass";
}

That’s the basics of it. Then I add a top and left margin to it. The margin isn’t included in the class “blackDotClass” because the margin will be different for each created element. For example, the first dot will be to the right of the parent element, the second created dot will be below it, the third to the left, and so on. I’ll insert it like this:

if (clicks == 1) {
  let blackDot = document.createElement('div');
  blackDot.id = "blackDotID1";
  document.getElementById('container').appendChild(blackDot);
  blackDot.style.marginTop = "25px";
  blackDot.style.marginLeft = "200px";
  blackDot.className = "blackDotClass";
}

Then, if the parent dot is clicked a second time, we could do something like this:

if (clicks == 2) {
  let blackDot = document.createElement('div');
  blackDot.id = "blackDotID2";
  document.getElementById('container').appendChild(blackDot);
  blackDot.style.marginTop = "0px";
  blackDot.style.marginLeft = "200px";
  blackDot.className = "blackDotClass";
}

The only thing we’ve changed is the “ID” and the top margin. Then for the third element, we would probably change the top and left margins, to place each new element in a circle around the parent dot. This part is simple enough, but it would still be confusing for a user without physical lines that connect one dot to another. Otherwise, again, it would be really difficult to tell which elements are actually connected, with nothing more to go on than spacing.
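That circular placement is just polar coordinates. As a rough sketch (the radius, dot count, and margin values here are made up for illustration, not taken from the article), the offsets for the nth child dot could be computed like this:

```javascript
// Position the nth child dot on a circle around the parent dot.
// The radius, dot count, and index are hypothetical values for illustration.
function dotPosition(index, total, radius) {
  const angle = (2 * Math.PI * index) / total; // spread dots evenly around the circle
  return {
    left: Math.round(radius * Math.cos(angle)),
    top: Math.round(radius * Math.sin(angle)),
  };
}

// With four dots and a 200px radius, the first child sits to the right
// of the parent and the second directly below it:
console.log(dotPosition(0, 4, 200)); // { left: 200, top: 0 }
console.log(dotPosition(1, 4, 200)); // { left: 0, top: 200 }
```

Each result can then be fed into the marginTop/marginLeft of the created dot instead of hard-coding a value per click.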

My initial idea to solve this was to use polygons. Since each “dot” or “element” or whatever we’re using will have a set of x and y coordinates, I could use the coordinates of the parent element and the coordinates of the new element to draw a polygon line from one to the other. Here’s a diagram of what I had envisioned:

I actually went through several iterations of this idea before coming to the conclusion that svg polygons wouldn’t work. The reason is that polygons are an svg element, and have to be inside an svg container. Because of this, you aren’t simply lining up a with b; you’re also lining them up with the svg element itself, which has its own set of dimensions. Take this for example:

You might start the process, have everything lined up properly, and end up with the above example, where the line doesn’t connect point a to point b. Naturally, the assumption is that the line isn’t long enough, and that the problem lies with either the coordinates or the length of the polygon, when your problem could be that the svg container isn’t the right size or isn’t aligned properly. What you actually have is this:

Your line is right, and your coordinates may be right, but because the svg container is too small, you only see a small portion of the actual polygon. Sure, you can draw a border around the container to see where it is, but imagine how complex this gets when you have several elements, plus the svg containers and polygons… it’s a nightmare.

So I came up with a simpler solution: just a div, with a width of 1 and a border. Each time I click “A” and create a new child element “B”, I also create a third element, “C”: a line connecting the two, or rather a div with a border, between the two elements.

If A and B are on the same x axis, and the display is set to inline, or they’re contained in a span tag, then your job is done, because you don’t need to make any calculations for the y axis. However, again, from the user’s standpoint, it would be difficult to know where ideas and elements are connected if everything is in a straight line. That’s why mind-mapping diagrams usually tend to be circular in shape. So here’s what I came up with. After I’ve created B by clicking A, I get the coordinates of each, just like I had done with the polygon.

let element1 = dot1.getBoundingClientRect();
let element2 = dot2.getBoundingClientRect();

I also want to find the midpoint of my element. If my dot is 300px large, I don’t want the line to connect to the top, but rather the middle. I do this by dividing the height and width by 2, which is data I can get from my getBoundingClientRect() call.

let midpointX1 = element1.width/2;
let midpointY1 = element1.height/2;

let midpointX2 = element2.width/2;
let midpointY2 = element2.height/2;

Now, my thought process is this. If I know the x and y coordinates, also included in the getBoundingClientRect() data, then I can hopefully do some math. What I want to know is the length of the line that would connect both elements, and the angle of that line. I can do this with some trigonometry. First, I’ll find the length with the Pythagorean Theorem: A squared + B squared = C squared.

By turning the relationship of the two elements into the corners of a triangle, we can then use math to discover the length of the line, as I mentioned above, and then we can use the tangent to discover the angle of the line. What I’ll do is create a function that takes the coordinates of both and runs them to find what I’m looking for.

let midpointX1 = element1.width/2;
let midpointY1 = element1.height/2;

let midpointX2 = element2.width/2;
let midpointY2 = element2.height/2;

let top1 = element1.top - midpointY1;
let top2 = element2.top - midpointY2;
let left1 = element1.left - midpointX1;
let left2 = element2.left - midpointX2;

function findTriangle(w, x, y, z) {

  let difference = function (a, b) { return Math.abs(a - b); };
  let opposite = difference(w, x);
  let adjacent = difference(y, z);

  let hypotenuseLengthSquared = Math.pow(opposite, 2) + Math.pow(adjacent, 2);

  let hypotenuseLength = Math.sqrt(hypotenuseLengthSquared);

  // convert radians to degrees, since CSS rotate() expects degrees
  let angle = Math.atan(opposite / adjacent) * (180 / Math.PI);
  return [opposite, adjacent, hypotenuseLength, angle];
}

let triangle = findTriangle(top1, top2, left1, left2);

The function findTriangle takes the top and left of each element, minus the midpoints (assuming our elements are symmetrical), which amounts to the x and y coordinates of both elements, and uses them to calculate the angle and the length of the hypotenuse. I also have the function return the adjacent and opposite sides in case I need to use them later as well. Now, I’ll create my div, using those returned values.

let newDiv = document.createElement('div');
newDiv.id = "test";
document.getElementById('dot1').appendChild(newDiv);
newDiv.style.backgroundColor = "#1cce3a";
newDiv.style.borderWidth = "3px";
newDiv.style.borderStyle = "solid";
newDiv.style.borderColor = "#1cce3a";
newDiv.style.width = "" + triangle[2] + "px";
newDiv.style.transform = "rotate(" + triangle[3] + "deg)";
newDiv.style.zIndex = -1;

Because my return statement is an array, when I set the width and transform of my element, I use only the array indices that I need: [2] and [3].
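Stripped of the DOM plumbing, the trigonometry can be sanity-checked on its own. Here is the same math applied to a hypothetical 3-4-5 right triangle (these coordinates are made up, not the article’s actual dots):

```javascript
// Same math as the findTriangle function above: hypotenuse via the
// Pythagorean theorem, angle via the arctangent, converted from radians
// to the degrees that CSS rotate() expects.
function triangleFromOffsets(top1, top2, left1, left2) {
  const opposite = Math.abs(top1 - top2);
  const adjacent = Math.abs(left1 - left2);
  const hypotenuse = Math.sqrt(opposite * opposite + adjacent * adjacent);
  const angleDeg = (Math.atan(opposite / adjacent) * 180) / Math.PI;
  return { opposite, adjacent, hypotenuse, angleDeg };
}

// A 3-4-5 right triangle, scaled by 10:
const t = triangleFromOffsets(0, 30, 0, 40);
console.log(t.hypotenuse);          // 50
console.log(t.angleDeg.toFixed(2)); // 36.87
```

If the angle came out as raw radians (or radians multiplied by an arbitrary factor), the rotate(…deg) transform would point the line the wrong way, which is why the degrees conversion matters.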

Now, I can run this exact same function inside my second if statement. Since the second dot will appear slightly lower in the DOM than the first, the function will calculate the distance between the two and return the connecting line (div) so that they are visually connected on the screen, and we end up with something similar to my original vision. However, even with these precise calculations, things can easily go wrong here. For example, if the container is set to a flexbox display, it will throw all of the calculations off. But, all in all, it’s a pretty fun exercise. Feel free to reach out with feedback or questions. Thanks!

Finding a Hypotenuse with JavaScript was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Future: 5 Native Ad Trends in 2018

2018-01-12 16:43:17

The beginning of a new year is a good time to talk about the future trends of native advertising that big brands and startup marketers will try to embrace.

The inception of native advertising on new platforms

Native advertising is now going beyond media outlets and blogs. Social media have given active users the opportunity to earn money, not only through embedded advertising, but also through the effective content integration of various brands’ offers. Meanwhile, companies are able to reach a large audience without paying the relevant media for this access.

The number of platforms that allow the placement of such advertising is constantly growing. Marketing and native promotion are no longer limited to blogs, Instagram and YouTube.

A few years ago, MirriAd, a company dealing with native advertising, ran a campaign with Universal Music Group in which native advertising was inserted into music artists’ videos.

Today, streaming services like Pandora and Deezer are experimenting with musical native advertising. Musicians can buy ‘listenings’ of their songs — just like buying views in social media. Everything shows that the number of platforms and the ways to display native advertising will only grow.

The death of advertising on the mobile web

More and more advertisers realise that people use smartphones differently from desktop computers. According to the statistics, only 11% of users go to the mobile versions of sites, while most (89%) prefer to spend time in apps. Developers feel this trend, so today there are mobile apps for all occasions; the number of them in popular stores is approximately 5 million.

In-app advertising is much more effective than promotion on the mobile web simply because an app is more convenient for users. According to an MMA study, in-app native ads are viewed three times more often than banners on mobile sites. Analysts at Business Insider stated that the likes, shares and other engagement metrics of in-app advertising are 20–60% higher than those of mobile banners.

All this means that in-app advertising is pushing out the mobile web, as stated in a survey by the advertising platform Smaato. According to this company, in 2016, 19% of mobile advertising budgets were spent on the mobile web and 81% on in-app advertising. This year, the web has had only 6% and apps 94%. Next year, advertising on the mobile web will finally die.

Native video content development

In recent years, the fastest-growing advertising channel has been video, thanks to its high engagement compared with other types of content across different platforms.

Today, there are two basic ways to insert ads into video: pre-rolls and commercial breaks, or native integration. The first option is not very popular with audiences; few people like to watch ads before or during videos, just like on TV. Native integration, by contrast, does not cause such irritation.

The company Sharethrough completed a survey comparing five advertising campaigns using pre-rolls to the same number of native campaigns. The results in each case were better with native ads. In one case, the advertised brand, the soft drink Jarritos, significantly increased its brand lift (user interaction with the brand) with native placements, while pre-rolls gave only a 2.1% increase.

PR role changes

The growing number of channels and ways of communicating is also changing the role of internal PR departments. Today, a business is able to interact with its audience in different ways and in different situations. At some moments, crisis management may be required: extinguishing negativity and preventing someone from writing about an event. PR professionals capable of preventing a publication become more valuable than those “making the placement”. Meanwhile, other situations require different skills, such as event promotion, support for a new product launch and so on.

An in-house PR team can’t do everything at the same time, so outsourcing is becoming more and more popular for dealing with these emerging challenges. For example, the staff’s workload may become so large that they won’t be willing to spend time on media relations. An outside partner may simply use native advertising and publish the relevant content on a fee basis, but they will do it promptly and to a certain standard.

Content marketing: the distribution problem is key

Traditionally, major brands’ marketers have been used to making large advertising purchases on TV, radio and in print media. This market is well established, the interaction chains are up and running, and specific market players (from agencies to brokers and intermediary networks) are responsible for each stage of advertising, from the initial need for advertising to its appearance in the targeted media. The pricing of advertising solutions is clear and has not changed for decades.

Sponsored publications in online media are not as simple: there are no uniform standards for such advertising. As a result, businesses face new conditions for every placement. According to Alexander Storozhuk, the founder of the platform, the existing media services are still evolving and are not always capable of handling the right advertising volumes or maintaining a high level of quality. In the end, brands are forced to contact bloggers and online publications directly, and communication with them can be difficult. There is also the problem of unstable pricing; it is not always clear why a post on one site is far more expensive than on a similar one. This complicates planning and budget allocation for an ordinary brand.

The situation is gradually changing as the distribution channels and native advertising infrastructure continue to evolve. We can expect new services and the first active industry-specific standards to appear in 2018.

Future: 5 Native Ad Trends in 2018 was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Programmatic emails with API-first providers

2018-01-12 15:34:17

Every e-commerce business sends lots of emails. Be it part of a marketing campaign or payment processing, almost every operation or process is often wrapped up with an email message. However, email server configuration and maintenance is complex; it’s actually a separate field of study. When push comes to shove and the first users come to the platform, you really don’t want to figure out how to keep your email infrastructure up and running. You clearly don’t want to worry about email deliverability either. So, what can we do about it?

‍Photo by Mathyas Kurmann on Unsplash

This is the 5th part of the series “Building Online Marketplace From Scratch”. In this part, we’ll learn how to enable your platform to send emails with API-first email providers, and how to let your non-technical colleagues create and modify email templates without taking up developers’ time.

2 Types Of Emails

The online marketplace we’re building, like pretty much every online company out there, needs 2 types of emails to communicate with users and other stakeholders:

  • Marketing

- Content Promotion & Offers

- Sales Emails & Communication

  • Transactional

- Commerce receipts and shipment notifications

- Account updates

- Password changes

- Order status updates

In the case of an early stage business like Manufaktura, the first category of emails fits the solutions provided by products like MailChimp or GetResponse. They offer a simple way to roll out and monitor marketing campaigns run via the email channel. The mechanics of marketing emails are quite standard for an early stage startup, so tapping into one of these products will do. In this article, however, we want to focus on transactional emails.

The type and frequency of transactional emails differ from company to company and from process to process. That’s why we need a way to send highly customizable messages in the first place. Additionally, sooner rather than later, you’ll find you also need to make email template and content modification approachable for marketers and the customer service team.

The answers to these issues come with the programmatic email service providers. Let’s explore this approach step by step.

API-first email provider

Email service providers (ESPs) like SendGrid, Mandrill or SparkPost abstract the email server configuration magic away for you. They expose email functionality behind a simple REST API. All you have to do is authenticate and call the corresponding endpoints.
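As a rough sketch of what such a call involves, this builds the JSON body for a SendGrid v3 mail-send request; the field names follow SendGrid’s v3 API, but the addresses and content are placeholders, not values from this project:

```javascript
// Build the JSON body for a SendGrid v3 POST /v3/mail/send request.
// The addresses and content are placeholders, not values from this project.
function buildMailPayload({ to, from, subject, html }) {
  return {
    personalizations: [{ to: [{ email: to }] }],
    from: { email: from },
    subject: subject,
    content: [{ type: "text/html", value: html }],
  };
}

const payload = buildMailPayload({
  to: "customer@example.com",
  from: "orders@example.com",
  subject: "Your receipt",
  html: "<p>Thanks for your order!</p>",
});
// POST this payload with your HTTP client of choice, adding an
// "Authorization: Bearer <your API key>" header.
```

Official SDKs wrap exactly this shape, so the controller code stays a handful of lines whichever client you use.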

With the use of this tool, you can forget, or at least put off, worrying about related issues such as:

  • Deliverability rate — ESPs have dedicated teams focused on this matter only
  • Reporting — out of the box you get bounce-, open-, click-rate summaries
  • Bulk send out — ESPs send billions of emails a month, so rest assured they know how to handle your batch of emails
  • Email filtering — ESPs automatically stop sending emails to bounces, blocked mailboxes, spam reporters or misspelled emails
  • Unsubscribed list management
  • Custom domain setup

But the utmost benefit of using ESP, especially in the early stage, is the fact it’s cheap. You can go through the pricing summary put together by Zapier to see it’s a matter of less than 100 dollars a month, often offered with substantive free quotas on top of that.

Every ESP has its pros and cons, but they mostly offer the same service. When choosing your provider, apart from the free quota, you should also take a look at SDK quality, other marketing features, and the UI — all the stuff that influences the speed of onboarding. In the early stage of your platform, you can change providers pretty easily after all.

In Voucherify we use SES for internal emails, but our clients can connect their Mandrill or SendGrid accounts to send coupon emails on their behalf.

In this tutorial, we’ll give SendGrid a try.

What to look for when sending an email

Sending an email is a straightforward thing. As you can see above, it boils down to just a few lines of code. But there are some good practices, and some low-hanging fruit, that let you introduce an email engine which reduces your headaches in the future.


Every SaaS/IaaS provider faces issues someday (even AWS). Therefore, it’s wise to connect some form of fallback provider. The effort isn’t big — just watch the responses you get from the primary ESP and call another one if there’s an error.

Although this solution will protect you from minor hiccups, sometimes your provider is hit by a massive outage. In this case, it’s reasonable to have your email service configured in a way that allows for a quick change of ESP.

It’s also worth noting that the consequences of random errors from your ESP can also be reduced by implementing a simple retry policy.


In the first days of your email send-outs, when traffic is low but every delivered message adds up to the overall customer experience, it makes sense to ensure emails fly as expected. Add yourself or your team to BCC to get 99.9% confirmation that all emails go through nicely.

Test mode

For test purposes, you might not want to utilize your free quota. Consider setting up different accounts for test/dev mode — it can even be a Gmail account. Or, perhaps you might want to skip the email altogether.

Take a look at this example of an email service. The code is prepared to introduce the best practices we just mentioned (fallback, BCC, test mode).
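A minimal, provider-agnostic sketch of those practices might look like the following; the two send functions are stand-ins for real ESP SDK calls (SendGrid, Mandrill, etc.), not code from this project:

```javascript
// A minimal email-service sketch covering test mode, BCC, retry, and fallback.
// primarySend and fallbackSend stand in for real ESP SDK calls.
async function sendEmail(message, options) {
  const { primarySend, fallbackSend, testMode = false, bcc = [], retries = 2 } = options;

  if (testMode) return { status: "skipped" }; // don't burn quota in dev/test
  const msg = { ...message, bcc };            // copy the team in the early days

  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await primarySend(msg);          // happy path: the primary ESP
    } catch (err) {
      // transient error: the loop retries the primary provider
    }
  }
  return fallbackSend(msg);                   // primary kept failing: fall back
}

// Usage with stubbed providers:
const alwaysFails = async () => { throw new Error("ESP outage"); };
const fallbackOk = async () => ({ status: "sent-via-fallback" });

sendEmail({ to: "user@example.com", subject: "Hi" },
          { primarySend: alwaysFails, fallbackSend: fallbackOk })
  .then((result) => console.log(result.status)); // sent-via-fallback
```

Because the providers are passed in as plain functions, swapping an ESP during an outage is a one-line configuration change rather than a rewrite.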

Content modification

When you’re starting out and the requirements aren’t set in stone, assume that the email’s design and text will be modified, a lot. You’d better be prepared for that.

The first step in handling frequently changing content is supported out of the box — you just keep the templates in separate HTML files and change the “options.html” parameter to point to the respective path. The HTML email is then rendered by SendGrid and the send-out code remains intact. But let’s take this one step further.

For the time being, when marketing wants to modify copy, they have to ask you. You don’t want to let them change it themselves because they could break the HTML template. How do you bypass this problem so that marketers or customer service folks can manage the content — like they do in MailChimp, for example?

Luckily, we have a cheap and easy-to-use solution. With the help of Contentful (another API-first tool: a headless CMS) and our open source application, you can make email template editing available to copywriters. This is how it works:

  • Copywriters create/edit email copies in Contentful editor. They do this in so-called “draft mode”. It’s only about text — they can’t modify the HTML template in any way.
  • Before they actually push the message out to production, they can preview the final version of the email. This is achieved by visiting Contentful-emails web app, which renders a copy from Contentful based on the current HTML template.
  • If all is fine, the copy goes to production.
  • In case they want to update any copy, they just change the status to draft and experiment again, meanwhile the old version still works fine on production.

You can find the full description of this tool in this article.

When to send emails

When it comes to online marketplaces, there are dozens of situations when you must or should send an email. We can’t list them all here because they’re too business-specific. However, you can assume some emails will definitely be sent when an order changes its status. As you remember from our previous articles, at Manufaktura orders are managed by Salesforce. In our last post, we showed that every time the order status changes, we notify our external application (via an HTTP request). The application handles this callout, and that is a good place to put the email send-out functionality — see the example for the pending-to-start status update:

This structure is easily extendable: you just need to add another callout handler for each status in the application.
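One way to sketch that callout-handler structure is a plain status-to-handler map, so each new order status means adding one entry. The status names and messages below are illustrative placeholders, not Manufaktura’s actual ones:

```javascript
// Map each order-status transition to an email handler; adding a status
// means adding one entry. Status names and messages are illustrative.
const statusHandlers = {
  "pending-to-start": (order) => `Order ${order.id}: work has started`,
  "start-to-shipped": (order) => `Order ${order.id}: your items have shipped`,
};

function handleStatusCallout(status, order) {
  const handler = statusHandlers[status];
  if (!handler) return null; // unknown status: nothing to send
  return handler(order);     // in the real app, this would call the email service
}

console.log(handleStatusCallout("pending-to-start", { id: 42 }));
// Order 42: work has started
```

The dispatch map keeps the HTTP callout endpoint itself a one-liner, with all the status-specific email logic in one place.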

Bonus — deliverability issue solved

We have a story to share. For one of our projects, email deliverability was the utmost priority: if users weren’t onboarded correctly, the business would be severely hurt. We used Mandrill for our emails. Although it was properly configured from the outset (confirmed with Mandrill support), some users didn’t get their invitations, for example. And it’s not that the messages landed in SPAM; they weren’t delivered at all. To make matters worse, there was no pattern to these incidents: both the time and the affected domain were random.

We suspected that the problem occurred because we had used Mandrill’s shared email servers (one IP used by multiple users), and that this configuration might have decreased our SPAM rating. But this was a very early stage and we hadn’t spammed anyone.

We decided to switch to SendGrid, but it didn’t help. As a last resort, we bought a dedicated IP in Mandrill, and it worked like a charm. The funny thing is that the Mandrill support team didn’t encourage this, because in their eyes it would decrease deliverability for a small-volume user like us.

The problem was also hard to spot in the first place, because the Mandrill deliverability report doesn’t show whether mail was delivered, only that it was sent successfully. We had to dig deeper into the UI to find the list of SMTP events and go through a forest of details to figure out whether messages had hit the target.


Our online marketplace — Manufaktura — has been equipped with a powerful communication channel. We approached this feature according to rules we set out in the first article — by focusing on speed and maintainability. Things our software needs in the frequently changing business environment.

The API-first email service providers allowed us to build an email machine with just a couple of lines of code, and the Contentful-emails application helped the marketing and customer service teams iterate on content without bothering developers.

Now that we have emails working, we can move on to payments!

Originally published at

Programmatic emails with API-first providers was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Dynamic (but limited) Page Size with Laravel Pagination

2018-01-12 15:08:47

Let users decide, but don’t let users abuse.

Laravel makes pagination an extremely simple process: you simply call paginate() on an Eloquent model and you’re done. Today I received a feature request where I had to allow the front-end team to decide the size of the page. That’s also a simple task; you just pass the size as a parameter: paginate($size). Something like this:

Sometimes this is all you need, because your business model can tell you whether that entity might grow indefinitely. In this case, we charge per user and it’s very unlikely that a company will be buying 1 million users. However, I know that in the next few weeks I’ll be working on some other APIs that do have millions of records, which means I cannot allow a page size bigger than a reasonable amount.
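A minimal way to enforce that cap at the controller level could look like this; the page_size parameter name and the cap of 100 are illustrative assumptions, not the original code:

```php
// Hypothetical controller action: let the client choose the page size
// via a "page_size" query parameter, but clamp it to a sane maximum.
public function index(Request $request)
{
    $size = min(max((int) $request->input('page_size', 15), 1), 100);

    return User::paginate($size);
}
```

The min/max pair guards against both a 1-billion page size and a zero or negative one.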

Great. Now I can make sure that the API I’m exposing on the internet will not degrade our services if someone sets page_size to 1 billion. But it still doesn’t feel awesome, because I have to remember to set this up every time I write a paginatable endpoint.

Paginatable Trait

Behind the scenes, Laravel falls back to getPerPage() on the Eloquent model when a specific page size is not provided. By default, Laravel ships with 15 items per page. With a Paginatable trait, we can override that method on models that should allow the requester to tell us how many items per page they want, while validating that they’re not requesting too much.

Just use the trait on your model and the controller can be a slim paginate() call. It’s a really simple and small thing, but quite powerful and convenient if you’re expecting to write these kinds of public APIs.
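A sketch of what such a trait could look like, with illustrative names (the page_size parameter and maxPerPage property are assumptions, not the original code):

```php
// Hypothetical Paginatable trait: overrides Eloquent's getPerPage() so the
// requester controls the page size, clamped to a per-model maximum.
trait Paginatable
{
    public function getPerPage()
    {
        $max = property_exists($this, 'maxPerPage') ? $this->maxPerPage : 100;
        $requested = (int) request()->input('page_size', parent::getPerPage());

        return min(max($requested, 1), $max);
    }
}

class User extends Model
{
    use Paginatable;

    protected $maxPerPage = 50; // per-model cap
}

// The controller stays a slim call:
// return User::paginate();
```

Because paginate() consults getPerPage() when no size is passed, every endpoint using the trait gets the clamping behaviour for free.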

Dynamic (but limited) Page Size with Laravel Pagination was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read more

Crypto Regulatory Databases: A cost-effective way to manage ICO regulatory risks?

2018-01-12 14:59:40

Crypto Regulatory Databases — a cost-effective way of managing regulatory risks?

In our experience, limited up-front funding and difficulties with identifying a vast range of regulations in numerous jurisdictions make compliance especially challenging for crypto businesses. Non-compliance can jeopardise a crypto project, and have serious implications for its founders and team.

Knowing how to prioritise a vast range of regulatory considerations is key, combined with an overall understanding of the policy approach countries are taking.

Inspired by the World Bank’s ‘Ease of Doing Business’ rankings, Lupercal is among the first to pioneer a comprehensive database on cryptocurrency regulation. Our Crypto RDB provides you with the tools you need to cost-effectively identify regulatory considerations and prioritise where action is required.

This article explains why using regulatory databases can be the best approach to regulatory risk management for small to medium-sized crypto businesses and early-stage ICO projects, and for keeping on top of rapid regulatory developments.


The cryptocurrency industry is experiencing an extraordinary period of growth. The prices of many cryptocurrencies are at all-time highs, and more and more entities are looking to run an Initial Coin Offering or Token Generation Event (which we’ll collectively refer to as an ICO in this article).

The ICO boom has attracted the attention of regulators around the world — including recent cautionary messages from the US SEC.

Regulation of the Cryptocurrency Industry

Complying with regulation is becoming an increasingly important consideration for all cryptocurrency businesses, as there is a growing risk that regulatory investigations could be just around the corner.

Being non-compliant could carry significant penalties that, in extreme cases, can put an end to a crypto business.

Most attention so far has focused on ICOs, but regulation has a significant impact on all types of crypto business: post-ICO projects, cryptocurrency exchanges, and entities looking to adopt cryptocurrency into their existing business. Despite the industry’s early-days image of operating outside the law, failing to comply with regulations is not a commercially viable option.

Regulation’s Market Impact

Regulation continues to have an extremely significant impact on the market, as highlighted recently by the rapid panic-selling following the 11 January South Korean proposals to ban crypto exchanges. Successful ICOs have a keen understanding of optimising their platform in the crypto market — staying up to date with regulatory changes is key to doing this.

The Regulatory Problem for Crypto Businesses

Crypto businesses typically operate internationally. Most other businesses grow into their international reach gradually: they may start in South Korea and, as revenue grows, expand their supply chain to China and Malaysia, or their sales to Japan and Singapore. This means they can prioritise what regulatory advice is needed as the business develops.

The international nature of crypto businesses doesn’t give them this luxury. They often have to consider all jurisdictions’ laws at once (potentially before any revenue has been raised). This can be overwhelmingly complex, and potentially very expensive.

Why is it so Difficult?

It is widely known in the industry that the US approach to securities law for ICOs can be problematic, that China’s approach to crypto regulation generally can make it difficult to operate there, and that anti-money laundering regulations are important to consider.

But this is just the tip of the regulatory iceberg. What are the securities law considerations in the UK? Or the EU? Or Hong Kong? Or Australia?

If the tokens aren’t securities, do other financial/corporations law regulations apply?

Are there options for protecting intellectual property?

What consumer law considerations could come into play?

And what is the general approach of a country’s government? This policy approach will likely inform the regulations of the future.

The answer in one country may be completely different in another country. The risks of not addressing certain regulations can vary dramatically. And to make matters worse, most countries have yet to issue public guidance on how they will apply existing regulations to cryptocurrency.

The Answer — Regulatory Databases

The key to successfully managing regulatory risks is knowledge and prioritisation.

For smaller projects, engaging specialised advisors to address all potential regulatory issues can be costly and potentially unnecessary. But missing particular issues can have major consequences for the project’s future.

Regulatory databases — like Lupercal’s CryptoRDB — provide the tools crypto businesses/ICOs need to help identify:

  • a check-list of key regulatory considerations;
  • an outline of what regulatory issues could arise;
  • which jurisdictions are easier and harder to carry out crypto projects in (and which jurisdictions to avoid);
  • the policy approach of governments (long-term) and regulators (short term).

Regulatory databases therefore provide you with the tools you need to identify when advice may be required, to cost-effectively oversee certain regulatory issues in-house, or to go to advisors with a better understanding of what’s needed.

Check out Lupercal Capital’s CryptoRDB — the world’s leading database for cryptocurrency regulation.

Subscribe now for major discounts.

Alternatively, get in touch to learn more about our tailored crypto/ICO consulting & project management services.



Understanding The React Source Code IV

2018-01-12 14:58:38

Understanding The React Source Code — Initial Rendering (Class Component) IV

Photo by Joshua Sortino on Unsplash

We have completed the rendering process of a simple component. This time we are going to explore more ramifications of this process by discussing how a class component (a typical one we might use in everyday development) is rendered.

Files used in this article:

the same as in posts one and two

I use {} to reference a previous post when a method (or logic process) has already been discussed there.

The component, named App, is similar to what I gave at the beginning of post one. But since we have leveled up a bit, it does not look that daunting anymore.

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      desc: 'start',
    };
  }

  render() {
    return (
      <div className="App">
        <div className="App-header">
          <img src="main.jpg" className="App-logo" alt="logo" />
          <h1> "Welcome to React" </h1>
        </div>
        <p className="App-intro">
          { this.state.desc }
        </p>
      </div>
    );
  }
}

export default App;

As mentioned, the component above is rendered using:

ReactDOM.render(
  <App />,
  document.getElementById('root')
);

Now the babeled code:

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      desc: 'start',
    };
  }

  render() {
    return React.createElement(
      'div',
      { className: 'App' },
      React.createElement(
        'div',
        { className: 'App-header' },
        React.createElement('img', { src: "main.jpg", className: 'App-logo', alt: 'logo' }),
        React.createElement('h1', null, ' "Welcome to React" ')
      ),
      React.createElement('p', { className: 'App-intro' }, this.state.desc)
    );
  }
}

export default App;

ReactDOM.render(React.createElement(App, null), document.getElementById('root'));

Here we consider Component a common base class, as other methods will not be used in this post.
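Since we treat Component as a plain base class here, a minimal sketch of what we assume it provides may help. This is not React’s actual source, and the Greeting subclass is purely hypothetical; the point is only that the base class stores props and exposes setState, which delegates to an updater injected by the renderer.

```javascript
// A minimal sketch (not React's real source) of the Component base class:
// it stores props/context and exposes setState, which delegates to the
// renderer-injected updater.
function Component(props, context, updater) {
  this.props = props;
  this.context = context;
  this.updater = updater; // injected by the renderer, unused in this sketch
}
Component.prototype.isReactComponent = {};
Component.prototype.setState = function (partialState, callback) {
  this.updater.enqueueSetState(this, partialState, callback);
};

// A hypothetical subclass sees its props through the base class, like App does.
function Greeting(props) {
  Component.call(this, props);
}
Greeting.prototype = Object.create(Component.prototype);

const g = new Greeting({ name: 'React' });
console.log(g.props.name); // -> 'React'
```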

This time we can fast-forward through the logic that is shared with the simple component.

Construct the top level wrapper `ReactCompositeComponent[T]`

The designated data structure:

This step is almost the same as in simple component rendering, so I will give only a brief description. It:

1) creates ReactElement[1] using ReactElement.createElement(type, config, children) (this time App is passed to type, and config and children are null);

2) creates ReactElement[2] in _renderSubtreeIntoContainer();

3) creates the designated wrapper with instantiateReactComponent().

ReactElement.createElement(type,     // scr: -------------> App
                           config,   // scr: -------------> null
                           children  // scr: -------------> null
) // scr: ------------------------------------------------------> 1)
|=ReactMount.render(nextElement, container, callback)
|=ReactMount._renderSubtreeIntoContainer(
    parentComponent,  // scr: ----> null
    nextElement,      // scr: ----> ReactElement[1]
    container,        // scr: ----> document.getElementById('root')
    callback          // scr: ----> undefined
) // scr: ------------------------------------------------------> 2)
  |-instantiateReactComponent(      // scr: -------------------> 3)
      node,             // scr: ------> ReactElement[2]
      shouldHaveDebugID /* false */
    )
    |=ReactCompositeComponent.construct(element /* ReactElement[2] */)

This is what we covered in {post one}.
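As a refresher on step 1), the following is a simplified sketch of what we assume ReactElement.createElement(type, config, children) does, ignoring keys, refs and defaultProps (a sketch, not the real ReactElement.js): config is copied into props, any remaining arguments become props.children, and the result is tagged with its type.

```javascript
// Simplified createElement sketch: copy config into props, attach children,
// and return a plain element record tagged with its type.
function createElement(type, config, ...children) {
  const props = Object.assign({}, config); // Object.assign ignores a null config
  if (children.length === 1) {
    props.children = children[0];
  } else if (children.length > 1) {
    props.children = children;
  }
  return { type: type, props: props };
}

// Step 1) above: App is passed as type, config and children are null.
// (A string stands in for the App class here.)
const reactElement1 = createElement('App', null);
console.log(reactElement1.type);                // -> 'App'
console.log('children' in reactElement1.props); // -> false
```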

Initialize `ReactCompositeComponent[T]`

The designated data structure:

The step is the same as well:

1) ReactDOMContainerInfo[ins] represents the container DOM element, document.getElementById(‘root’);

2) TopLevelWrapper is instantiated (TopLevelWrapper[ins]) and is set to ReactCompositeComponent[T]._currentElement alongside the initialization of other properties;

3) Again, mountComponentIntoNode is the cross point of upper and lower half, within which ReactCompositeComponent[T].mountComponent returns a complete DOMLazyTree that can be used by ReactMount._mountImageIntoNode, a method from lower half.

ReactDOM.render                                            ___
|=ReactMount.render(nextElement, container, callback)       |
|=ReactMount._renderSubtreeIntoContainer()                  |
  |-ReactMount._renderNewRootComponent()                    |
    |-instantiateReactComponent()                           |
    |~batchedMountComponentIntoNode()                   upper half
      |~mountComponentIntoNode()                   (platform agnostic)
        |-ReactReconciler.mountComponent()    // scr-----> 1)
          |-ReactCompositeComponent[T].mountComponent() scr:> 2)3)
          ...                                              _|_
          ...                                           lower half
        |-_mountImageIntoNode()                  (HTML DOM specific)

This is what we covered in the first part of post two.

Except for some small differences in argument values, the top level wrapper related operations are exactly the same as what we discussed in previous posts. After those operations complete, we come to the first ramification that is specific to the class component.

`ReactCompositeComponent[T].performInitialMount()` — create a `ReactCompositeComponent` from `ReactElement[1]`

This step strips the wrapper and creates another ReactCompositeComponent instance to reflect App component.

The designated data structure:

The call stack in action:

|~mountComponentIntoNode()                                      |
  |-ReactReconciler.mountComponent()                            |
    |-ReactCompositeComponent[T].mountComponent()               |
    /* we are here */                                           |
    |-ReactCompositeComponent[T].performInitialMount(           |
        renderedElement,   // scr: -------> undefined       upper half
        hostParent,        // scr: -------> null                |
        hostContainerInfo, // scr: -------> ReactDOMContainerInfo[ins]
        transaction,       // scr: -------> not of interest     |
        context,           // scr: -------> not of interest     |
      )                                                         |

The process is very similar to the performInitialMount() in {post two}. The only difference here is that based on the type of ReactElement[1], _instantiateReactComponent creates a ReactCompositeComponent for the class component (App) instead of a ReactDOMComponent. To put it briefly:

1) it calls _renderValidatedComponent(), which in turn calls TopLevelWrapper.render(), to extract ReactElement[1]; 2) it instantiates a ReactCompositeComponent with _instantiateReactComponent() (we name the object ReactCompositeComponent[ins]); and 3) it calls ReactCompositeComponent[ins].mountComponent() (recursively) through ReactReconciler and moves on to the next step.

performInitialMount: function (renderedElement, hostParent, hostContainerInfo, transaction, context) {
  var inst = this._instance;
  if (inst.componentWillMount) {
    ... // scr: we did not define componentWillMount() in App
  }
  // If not a stateless component, we now render
  if (renderedElement === undefined) {
    renderedElement = this._renderValidatedComponent(); // scr: -----> 1)
  }
  var nodeType = ReactNodeTypes.getType(renderedElement); // scr: -> the type is ReactNodeTypes.Composite this time
  this._renderedNodeType = nodeType;
  var child = this._instantiateReactComponent(
    renderedElement,
    nodeType !== ReactNodeTypes.EMPTY /* shouldHaveDebugID */
  ); // scr: ----------------------------------------------> 2)
  this._renderedComponent = child;
  var markup = ReactReconciler.mountComponent(
    child,
    transaction,
    hostParent,
    hostContainerInfo,
    this._processChildContext(context),
    debugID
  ); // scr: ----------------------------------------------> 3)
  ... // scr: DEV code
  return markup;
},

`ReactCompositeComponent[1].mountComponent()` — initialize `ReactCompositeComponent[1]`

The designated data structure:

The call stack in action:

|~mountComponentIntoNode() |
|-ReactReconciler.mountComponent() |
|-ReactCompositeComponent[T].mountComponent() |
|-ReactCompositeComponent[T].performInitialMount() upper half
|-ReactReconciler.mountComponent() |
/* we are here */ |
|-ReactCompositeComponent[1].mountComponent(same) |

Same as in ReactCompositeComponent[T].mountComponent() {post two}, the most important task of this step is to instantiate App with ReactCompositeComponent[ins]._currentElement (ReactElement[1]).

The line in the method that does the job is:

var inst = this._constructComponent(
  doConstruct,
  publicProps,
  publicContext,
  updateQueue
);

in which the constructor of App gets called.

constructor(props) {
  super(props);
  this.state = {
    desc: 'start',
  };
}
// copied from the beginning of this text

Then App[ins] (as we name it) is set to ReactCompositeComponent[ins]._instance, and a back-link is also created through ReactInstanceMap.

Other operations include: 1) App[ins].props references ReactElement[1].props; and 2) ReactCompositeComponent[ins]._mountOrder is 2, due to the ++ operating on the global variable nextMountID.
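The forward link and back-link can be sketched as follows; the helper name linkInstances is hypothetical, and the real ReactInstanceMap is a small module rather than a bare WeakMap, but the shape of the relationship is the same.

```javascript
// Sketch of the two-way link: the internal wrapper points at the public
// instance via _instance, and ReactInstanceMap maps the public instance
// back to its internal wrapper.
const ReactInstanceMap = new WeakMap();

function linkInstances(internalInstance, publicInstance) { // hypothetical helper
  internalInstance._instance = publicInstance;             // forward link
  ReactInstanceMap.set(publicInstance, internalInstance);  // back-link
}

const reactCompositeComponentIns = { _mountOrder: 2 }; // ReactCompositeComponent[ins]
const appIns = {};                                     // App[ins]
linkInstances(reactCompositeComponentIns, appIns);

console.log(ReactInstanceMap.get(appIns) === reactCompositeComponentIns); // -> true
```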

It is important to note that App[ins].render() is the other App method we defined in the beginning. Unlike TopLevelWrapper[ins].render(), which returns a concrete ReactElement instance, App[ins].render() relies on React.createElement() at the time it is invoked. We will revisit this method soon.
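The contrast can be sketched with two hypothetical stand-in objects (not React’s source): TopLevelWrapper[ins].render() hands back an element captured earlier, while App[ins].render() builds a new element on every invocation.

```javascript
// Stand-in element factory for the sketch.
const makeElement = (type) => ({ type, props: {} });

const capturedElement = makeElement('App');   // stands in for ReactElement[1]
const topLevelWrapperIns = {
  render() { return capturedElement; }        // always the same concrete object
};

const appInstance = {
  render() { return makeElement('div'); }     // built fresh at invocation time
};

console.log(topLevelWrapperIns.render() === topLevelWrapperIns.render()); // -> true
console.log(appInstance.render() === appInstance.render());               // -> false
```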

Since this step is very similar to the one that initializes ReactCompositeComponent[T] {post two}, we do not further examine the workhorse method (i.e., mountComponent()).

`ReactCompositeComponent[ins].performInitialMount()` — create a `ReactDOMComponent`

|~mountComponentIntoNode()                                  |
  |-ReactReconciler.mountComponent()                        |
    |-ReactCompositeComponent[T].mountComponent()           |
    |-ReactCompositeComponent[T].performInitialMount()  upper half
      |-ReactReconciler.mountComponent()                    |
      /* we are here */                                     |
      |-ReactCompositeComponent[1].mountComponent()         |
        |-this.performInitialMount()                        |
          |-this._renderValidatedComponent()                |
          |-instantiateReactComponent()                    _|_
        |-ReactDOMComponent[6].mountComponent()         lower half

Before a ReactDOMComponent (we know that this is the class that handles DOM operations) can be created, the ReactElements defined within App[ins] need to be extracted. To do so, App[ins].render() is called by the following line (in _renderValidatedComponent()) {post two}

renderedElement = this._renderValidatedComponent();

Then App[ins].render() triggers

The cascading calls of React.createElement()

To understand how the ReactElement tree is established, let’s first revisit the App.render() implementation:

render() {
  return React.createElement(                  // scr: -----------> 5)
    'div',
    { className: 'App' },
    React.createElement(                       // scr: -----------> 3)
      'div',
      { className: 'App-header' },
      React.createElement('img',               // scr: -----------> 1)
        { src: "main.jpg", className: 'App-logo', alt: 'logo' }),
      React.createElement('h1', null,          // scr: -----------> 2)
        ' "Welcome to React" ')
    ),
    React.createElement('p',                   // scr: -----------> 4)
      { className: 'App-intro' },
      this.state.desc)
  );
}
// copied from the beginning of this text

In this code snippet I also give the call order of the createElement()s, which follows a very simple principle: arguments are resolved (with createElement()) from left to right before the enclosing createElement() gets called.
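This ordering is just JavaScript’s left-to-right argument evaluation; a traced, hypothetical stand-in for createElement makes it visible:

```javascript
// Record the order in which nested "createElement" calls actually run.
const callOrder = [];
function traceCreateElement(tag, ...children) { // hypothetical traced stand-in
  callOrder.push(tag);
  return { tag, children };
}

traceCreateElement('App',              // runs last  -> 5)
  traceCreateElement('App-header',     //            -> 3)
    traceCreateElement('img'),         // runs first -> 1)
    traceCreateElement('h1')),         //            -> 2)
  traceCreateElement('p'));            //            -> 4)

console.log(callOrder); // -> ['img', 'h1', 'App-header', 'p', 'App']
```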

Then we can examine the creation of each ReactElement {post one}.

React.createElement(  // scr: --------------------------------> 1)
  'img',
  { src: "main.jpg", className: 'App-logo', alt: 'logo' }
)

creates ReactElement[2]:

; and

React.createElement(  // scr: --------------------------------> 2)
  'h1', null, ' "Welcome to React" '
)

creates ReactElement[3]:

(Now the two arguments for 3) are resolved.)

; and

React.createElement(                // scr: -----------> 3)
  'div', { className: 'App-header' }
  /* plus the two children resolved in 1) and 2) */
)

creates ReactElement[4]:

; and

React.createElement(                // scr: -----------> 4)
  'p', { className: 'App-intro' }, this.state.desc
)

creates ReactElement[5]:

(Now the arguments for 5) are resolved.)

; and

return React.createElement(           // scr: -----------> 5)
  'div', { className: 'App' }
  /* plus the two children resolved in 3) and 4) */
)

creates ReactElement[6]:


Combined together, we get the element tree referenced by renderedElement:
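As a sketch, the resulting tree can be pictured as plain nested objects standing in for ReactElement[2] through ReactElement[6] (not React’s real element shape):

```javascript
// Plain-object stand-in for a ReactElement with explicit children.
const el = (type, props, ...children) => ({ type, props, children });

const reactElement6 = el('div', { className: 'App' },                     // [6]
  el('div', { className: 'App-header' },                                  // [4]
    el('img', { src: 'main.jpg', className: 'App-logo', alt: 'logo' }),   // [2]
    el('h1', null, ' "Welcome to React" ')),                              // [3]
  el('p', { className: 'App-intro' }, 'start'));                          // [5]

console.log(reactElement6.children[0].children[0].type); // -> 'img'
console.log(reactElement6.children[1].type);             // -> 'p'
```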

`ReactCompositeComponent[ins]._instantiateReactComponent()` — Create `ReactDOMComponent[6]`

The designated data structure:

Then the element tree is used to create ReactDOMComponent[6] by the following line (via _instantiateReactComponent()) {post two}

var child = this._instantiateReactComponent(
  renderedElement,
  nodeType !== ReactNodeTypes.EMPTY /* shouldHaveDebugID */
);

Now ReactReconciler.mountComponent() calls the mountComponent() of ReactDOMComponent[6], and the logic proceeds to the lower half.

to be continued…

Originally published at

