How Far Away Are We From a Completely Cashless Society?

Photo by Sharon McCutcheon on Unsplash

COVID-19 has (hopefully) shown us just how dangerous the unseen can be. It’s also made us much more aware of the potential effects of a cough, a handshake, an itchy nose, and even a ten-pound note or a pound coin, and their ability to spread viruses and other nasty things. There was even a rush to disinfect cash in Asia after the initial outbreak of COVID-19. Some banknotes last up to fifteen years, and research shows a note can pass between around 600 people every three years.

In recent years, use of digital currency has boomed. We’ve seen the rise of cryptocurrencies like Bitcoin and Ethereum, then a much-needed, mature step toward stable cryptocurrencies backed by assets in a real bank, or by other cryptocurrencies or commodities like gold. Now we’ve even seen full-fledged bills to introduce Central Bank Digital Currencies, also known as CBDCs. This week, the Coronavirus Stimulus Bill in the US introduced the idea of infrastructure for a central-bank-issued digital currency, likely an indirect response to China doing the exact same thing ahead of the curve. Coronavirus seems to be speeding up what was thought to be on track anyway, just for a more distant future.

In China, laws are already being drafted so the country can begin issuing its digital currency, though China is largely a cashless society already, with nearly 50% of all payments made by mobile. The only people who still seem to use cash are in rural China, which the government aims to address in the coming year.

Sweden also aims to go completely cashless in as little as four years. Though there is currently no talk of a state-issued digital currency, cash is no longer king in Sweden either.

Programmable Money – “If This Then That”

At its core, cryptocurrency offers the potential for programmable money: rules can be built into transactions through smart contracts. This is best captured by the popular phrase “if this, then that”: two parties predetermine terms, and if and when the requirements are met, the contract executes automatically. This is particularly useful in insurance policies or financial transactions that require an escrow. Code can control money for the first time, and this makes it much easier to create new financial tools and levers.
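As a toy illustration of that pattern (hypothetical names and a deliberately simplified model, not any real smart-contract platform), an escrow rule in the “if this, then that” style might look like this in Python:

from dataclasses import dataclass

@dataclass
class Account:
    balance: int = 0

def settle_escrow(delivery_confirmed: bool, amount: int,
                  buyer: Account, seller: Account) -> None:
    # "If this, then that": once the agreed condition is met,
    # the escrowed amount is released automatically.
    if delivery_confirmed:
        seller.balance += amount  # condition met: pay the seller
    else:
        buyer.balance += amount   # condition not met: refund the buyer

buyer, seller = Account(), Account()
settle_escrow(delivery_confirmed=True, amount=100, buyer=buyer, seller=seller)
assert seller.balance == 100

A real smart contract runs logic like this on-chain, with the condition typically fed by an oracle rather than a plain boolean argument, but the shape of the rule is the same.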

So not only is our society becoming cashless, but programmable too: fewer administrative tasks, fewer errors and, most importantly, more flexibility, security and control over our money.

The coming decade should see a rapid migration towards these concepts, and the countries experimenting the most with them will have the ability to do things with money that boggle the mind and streamline human life significantly. 


Ten Years of WFH Lessons

Suddenly become a home worker? Then you might be wondering how to do it right. At Heroku, more than half our team works remotely from home offices, cafes, and co-working spaces. Many of the questions you have right now are ones that we’ve been thinking about for years.

So, here are four tips from the Heroku team on how to make distributed work effective, fulfilling, and fun.

1. Ritual matters

If you work in an office, each day has a similar cadence. You leave home, commute, maybe grab a coffee, greet the same people on the way to your desk, and ease yourself into the day.

When you work from home, ritual is just as important. Maybe more so. Up until now, your home has been a place where you hang out with friends and family, do chores, sleep. To be productive, you’ll need a way to signal to yourself that, for a few hours, home is also where you focus on work.

There’s the obvious stuff. Shower, get dressed. But perhaps less obvious is the need to demarcate your day and your space. 

It might be tempting to review a pull request over breakfast but you’ll quickly burn out unless you stick to predictable working hours. Start and end your work at the times you would if you were in the office. Get up and take breaks. Eat lunch away from your screen.

Similarly, you need a dedicated space for work. If you don’t have the luxury of a dedicated desk, then choose a spot that you can clear during your work hours and that is free of distractions; or as free from distractions as possible. If you can work behind a closed door, all the better to signal “I’m working” to yourself and anyone else at home.

So, tip number 1 is to build a ritual that bookends your working day.

2. Be intentional about communication

In an office setting, much communication is unintentional.

Think about it. Simply being sat at a desk in an office is a form of communication. You’re saying, “I’m working, I’m here, I’m productive”. Then there are the quick chats in the kitchen in which big decisions are made or the meetings where someone’s raised eyebrow says a whole lot more than their words.

Communication in a distributed team takes more work but, done well, can lead to deeper understanding.

Let’s get the basics out of the way first. You need to communicate availability to your colleagues. Say “hi” when you start work, use Slack statuses to indicate when you want to focus, wave a cheery “goodbye” when you leave for the day.

The harder part is understanding each other, staying informed, and collaborating. That requires a change in communication pace and culture. Remote communication is lower bandwidth than in-person communication. Yet your brain, pattern-matching monster that it is, will fill in the gaps. Rather than make assumptions, ask questions.

If something seems stupid, assume you misunderstood then ask questions until you understand what the other person intended. When you have something to say, think about how to avoid ambiguity. 

Tip number two is to be intentional in how you communicate.

3. Get comfortable with asynchronous work

If an office is a monolith then a successful distributed team is like a set of microservices. You don’t need to know when your colleagues do their work or even how they do it. The important thing is the output and the expectations set.

This is not something you can control alone. Really, it’s largely down to the culture led by your manager. But a successful remote team balances leaving people to get on with their work against explicit times of synchronous communication. Office workers complain about meetings, but for remote workers they’re a rare opportunity for high-bandwidth communication. Need help working something out? Have a video call with a colleague. Want to keep the team up to date with important news? Schedule a weekly call.

No successful distributed team lead is hovering over their reports’ virtual shoulders.

Tip three is to accept that the work is what’s important. How and when it happens, within explicitly set expectations, is irrelevant.

4. Go out of your way to be a team

One downside of distributed working is that you could spend eight hours at your desk doing nothing other than work. All those little social interactions –– signing the card for Karen from accounting, catching up over the latest Better Call Saul episode, politely refusing Nigel’s overbaked cake –– happen less easily.

Not only does that make loneliness more of a problem but it’s harder for a team to bond. Rather than leave it to chance, in a remote environment you must go out of your way to interact with your team.

Schedule time for a coffee over a video chat. Sounds weird? Well, it’s not really about the coffee. It’s a chance to switch off for a few minutes and chat casually with a colleague. If you have a daily stand-up, extend the Friday edition by ten minutes so everyone can chat about their weekend plans. Okay, sure, right now their weekend plans are largely going to be variations on “having a quiet one indoors” but it’s the opportunity to be human with each other that is important.

Tip four is to make sure that it’s not all just about work. Take time to hang out with your colleagues, even if it is by video chat.

Working from home is great, if you get it right

You’re used to working and you’re used to being at home. So, combining the two should be easy, right? 

In our experience at Heroku, it turns out that it takes effort to work from home in a way that looks after your wellbeing while also helping you to be productive. Effective distributed working is also a joint effort. While you must make adjustments on your side, as someone working from home, a company’s leadership must instill a culture and set processes that set remote workers up for success.

Many people working from home for the first time right now are not doing so through choice. Both managers and team members have been thrown into a situation they weren’t expecting. That makes support from leadership, and a company-wide understanding of both the challenges and triumphs of distributed working, even more important.

In this unusual time, working from home is probably just one of the many changes you’re getting used to. However, rather than being an uncomfortable change, working from home can be an opportunity to get more done and spend less time commuting.


Tech Must Disrupt the Mental Health Hotline Industry

We are living at a time when more people are in distress due to the coronavirus: businesses shutting down, companies laying people off and beginning hiring freezes, founders facing increasing difficulty raising their rounds, financial upheaval for those who held a lot of equity, internship programs shutting down, and social isolation increasing at an unprecedented scale.

This raises an important question: how long will we stall the development of apps for those who are desperately in need?

Here are some statistics to consider:

  • 1 in 5 U.S. adults experience mental illness in a given year.
  • 1 in 6 U.S. youth aged 6–17 experienced a mental health disorder.
  • 1 in 25 U.S. adults experience serious mental illness each year.
  • Suicide is the second leading cause of death in the U.S. for people aged 10–34.
  • Suicide is the 10th leading cause of death in the U.S.

  • 56.4% of Americans (or over 24 million people) with a mental illness receive no treatment for their condition. Reasons range from not having insurance coverage, to mental health care not being covered by insurance, to the high costs of care.
  • Over 9.8 million adults in the US reported having serious suicidal thoughts, about 200,000 more people than the previous year.

Suicide rates among people 15 to 64 rose from 10.5 per 100,000 people in 1999 to 14 per 100,000 in 2017 (33% increase).

Entrepreneurs are:

  • 2X more likely to suffer from depression
  • 6X more likely to suffer from ADHD
  • 3X more likely to suffer from substance abuse
  • 10X more likely to suffer from bipolar disorder
  • 50% more likely to report having a mental health condition

And 72% of entrepreneurs surveyed self-reported mental health concerns.

1 in 5 college students has weighed suicide (responses from 67,000 college students from more than 100 American institutions).

Firstly, we must work towards destigmatizing suicidal thoughts. Often, suicides leave no notes, because even at the end, people likely feel stuck in an environment of fear and silence.

However, one week is likely too short to get the reach it needs, which is why you may have never even known that such a week existed. How could we expect people to feel comfortable opening up within a 7-day window on a topic that is heavily stigmatized?

The fact that only two options exist for those who are most vulnerable makes no sense, especially considering the fact that so many people are endangered. 

Lastly, we must provide tools and resources that are easily accessible for people with these conditions. As a team lead at HackMentalHealth, I am actively seeking solutions with other team leads that could ease this access. If you would like to contact us, whether that is for more resources or partnerships, please message us here.

Although suicides look like an insurmountable problem right now, I believe that a collaborative effort could lead to a decline in suicides. 

What makes this problem different from other problems is that it is statistically likely that someone around us has suffered losses, or lost so much control over their life that nothing makes them happy anymore.

Having worked with countless mental health patients as a researcher for the lab for youth mental health, in occupational therapy school, and as a blue dot regional lead during my time at one of the big tech companies, I got to see that the problem of mental health haunts us ubiquitously.

I hope that we can one day live in a world where there are as many therapy apps for patients with suicidal thoughts as there are meditation apps for the general public.

The reason why so many meditation apps were able to succeed was that there was no one-meditation-fits-all solution.

As such, we shouldn’t assume that a one-size-fits-all solution exists for those with suicidal thoughts either. Technology should empower users to do what they couldn’t do on their own, and I look forward to seeing how it disrupts therapies for those with suicidal thoughts.


Essential Algorithms: The Quick Sort

The Quick Sort is an interesting algorithm and a favorite among software engineers, with some unique advantages and quirks worth looking into. Quick Sort can be highly efficient, often outperforming Merge Sort, although certain cases can degrade its performance to that of Bubble Sort. As always, we’ll jump in first with a broad-strokes overview of how this particular algorithm works before exploring the finer points about why it behaves the way it does.

The Quick Sort: An Overview

We’ll start with an unsorted array:

arr = [9,7,4,2,3,6,8,5,1]

The Quick Sort works by selecting an item from somewhere inside of the array and comparing all of the items to that one. We’ll call this item our pivot. When an array is sorted, everything to the left of our pivot will be smaller than the pivot, and everything to the right of it will be larger. Quick Sort makes its way from the ends of the unsorted array towards the middle. When it finds an item on the left that should be on the right, and then also identifies an item on the right that should be on the left, it swaps these two items.

You can think of the part of the array on the left of the pivot and the part of the array on the right of the pivot as their own sub-arrays. For now, we’ll treat them as their own, distinctive sub-arrays, and then recursively apply the algorithm to each sub-array. This recursive division and comparison scheme is the same divide-and-conquer approach that Merge Sort takes, and thus the parallels here make it easy to see why it takes O(n*log(n)) time on average.
To illustrate this point and analyze how this works with a divide-and-conquer implementation, we will select the element as close to the middle of the array as possible. In the first iteration of the algorithm, we’ll select the number 3, in the middle, as the pivot. With our pivot selected, this is what our sub-arrays look like before we get started:

left: [9, 7, 4, 2]   pivot: 3   right: [6, 8, 5, 1]

So, how do we efficiently sort these two sub-arrays around the pivot? We can simply iterate over the arrays to see if anything on the right side is smaller than the pivot and move it to the left side, and vice versa. If we iterated over the left and right sides, moving the appropriate items as we go, we’d eventually wind up with every item on the side of the pivot where it belongs, and end up with a pair of arrays that looks like this:

left: [2, 1]   pivot: 3   right: [9, 7, 4, 6, 8, 5]   (each side still unsorted internally)

Now, we know that everything in the left array belongs left of the pivot, and everything in the right array belongs right of the pivot. We can now recursively apply this logic to all of these sub-arrays until each item is sorted. At least, that’s how the divide-and-conquer approach works.
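As a minimal sketch of that divide-and-conquer view (an illustration only: it allocates new lists on every call, unlike the in-place algorithm the rest of this article builds toward):

def naive_quicksort(arr):
    # Base case: zero or one item is already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]     # everything smaller
    middle = [x for x in arr if x == pivot]  # the pivot (and any duplicates)
    right = [x for x in arr if x > pivot]    # everything larger
    return naive_quicksort(left) + middle + naive_quicksort(right)

print(naive_quicksort([9, 7, 4, 2, 3, 6, 8, 5, 1]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]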

The actual quick sort algorithm doesn’t break anything up into smaller sub-arrays. Here, the act of dividing the arrays into pairs recursively, before performing comparisons, is used merely to illustrate intuitively why its average complexity is O(n*log(n)), which we’ll explore more later.

Time and Space

While we’ve discussed time complexity quite a bit in previous installments, one thing we have yet to discuss with similar fervor is space complexity. The Quick Sort algorithm, when done well, does not actually allocate new sub-arrays and feed them back into itself. Before jumping into what it does instead, let’s look at why it doesn’t do this. We can refer back to our Python code for Merge Sort from one of the previous parts of this series:

def merge_sort(unsorted):
    # merge() is defined in the Merge Sort installment of this series.
    if len(unsorted) > 1:
        mid = len(unsorted) // 2
        left = merge_sort(unsorted[:mid])
        right = merge_sort(unsorted[mid:])
        result = merge(left, right)
        return result
    else:
        return unsorted

Here, we can start analyzing how it uses space. It takes an unsorted array and allocates two more arrays, each at half the size of the array it was passed. It then feeds both of these arrays into the same function, which again allocates space for two more arrays, recursively. So, for example, take an array with 8 elements. In each iteration, we allocate n/2 space for a new array, going down the entire left side before recursively moving back up and working into the right side. The exact space complexity here isn’t important; what is important to understand is that it requires additional space, and allocating and deallocating memory for these operations affects performance.

Rather than allocating additional space to hold the sub-arrays being worked on, a function can be passed only the indices that outline the sub-array being worked on in the original array. This allows an array to be sorted by performing operations directly on the actual array, and is called sorting in place.

Sorting In Place

Sorting in place has the advantage of taking up only O(1) extra space. Say your function for Quick Sort only has 3 variables: the pivot, and the left and right side boundaries. If you’re writing in C, that means each function call only has to allocate space for 3 variables, which are probably only going to be 4-byte unsigned ints, or a total of 12 bytes. It doesn’t matter if the array being passed to it is 40 items or 40,000,000; it still only needs to allocate 12 bytes when it gets called. That is why it’s considered to have O(1) space complexity when sorting in place: the amount of space needed is constant and doesn’t grow.

In the earlier overview, the algorithm was explained as manually iterating over the sub-arrays and merely moving items around after comparing them to the pivot. Doing this in place requires a slightly different approach to accomplish the same thing. Consider our original unsorted array arr, [9,7,4,2,3,6,8,5,1]. With 9 items, if we selected the middle item, arr[4], our pivot would be 3. Instead of making a separate set of left and right arrays, we’ll sort in place by making a left index and a right index, which begin on the left and right boundaries of our array.

We start with the left item and compare it to our pivot. If the item is less than the pivot (that is, the item pointed to by the left index belongs to the left of the pivot), we move the left index forward by one and compare the next number. We keep moving the left index forward until we find an item that doesn’t belong to the left of the pivot. When we find such an item, we stop the left index and begin comparing the right index to the pivot. When an item on the left belongs on the right, and an item on the right belongs on the left, the two items are swapped. Since the first item the left index looks at, arr[0], is 9, which belongs to the right of the pivot, we stop there. Since the first item the right index looks at, arr[8], is 1, which belongs to the left of the pivot, these two items switch places. After the switch, the left index increments and the right index decrements, because both of these items are now where they should be, and the process begins again.

This behavior ensures that, at all times, everything to the left of the left index is always going to belong on the left side of the pivot, and everything to the right of the right index will always belong to the right of the pivot.

This method of sorting continues until the left and right indices meet and pass each other. In this example, the left index is now pointing to the 7, which is greater than 3, so we start moving the right index down towards the left until we find an item that belongs on the left of the 3. So we move the right index down, comparing 3 to 5, then 8, then 6, and finally 2. (The left and right indices always ignore the actual pivot and skip over it, as the pivot will be correctly placed in the last step.) So, our array now looks like this:

[1, 7, 4, 2, 3, 6, 8, 5, 9], with the left index on the 7 (arr[1]) and the right index on the 2 (arr[3])

Now, the 7 and 2 switch places, giving us [1, 2, 4, 7, 3, 6, 8, 5, 9]. With this, the left index and right index move, but now point to the same item, arr[2], which is 4. Even though they’re pointing to the same item, we continue the same logic as before. We compare 4 to our pivot, 3. It belongs on the right side of it, so we start moving the right index, looking for something smaller than 3. Since 4 is not smaller than 3, we decrement the right index.

This brings us to our final step. We know from before that everything to the right of the right index belongs to the right of the pivot, and everything to the left of the left index belongs left of the pivot. With the right index moving past the left index, we now know that everything except the pivot is in its final place.

The right index only passes the left when everything else is sorted, so the left index must be pointing to the last unsorted item, which can simply be swapped with the pivot. In our example, the 4 and the pivot 3 switch places, giving us [1, 2, 3, 7, 4, 6, 8, 5, 9], an array sorted in place relative to the pivot.

Looking at our array now, we can say that everything right of the pivot is where it belongs relative to the pivot, and everything left of it is where it belongs relative to the pivot. This algorithm can now be applied recursively to each side of the array. The left side would start its left index at the original left index again, and its right index would be at pivot-1. The right side would start its left index at pivot+1, and its right index would be the original right index.

Now that we have a high-level overview of how Quick Sort puts items into place, we can start discussing finer details and exploring other questions, such as how to choose the pivot in a way that lets us sort with the most efficiency.

Pivot Selection

Selecting the pivot for Quick Sort is the key to efficient, or inefficient, time complexity. The worst-case scenario for Quick Sort is O(n^2), yet when the pivot is chosen well, it runs in O(n*log(n)). Remembering what we did with Merge Sort, and looking at both 1) the recursive, dividing nature of Quick Sort and 2) the fact that the number of comparisons grows directly with the size of the input, we can easily see why Quick Sort can be O(n*log(n)). But what behavior causes it to degrade to O(n^2)?

There are two common methods of picking the pivot that are straightforward and easy, but not necessarily the best: picking the first item, and picking the last item. These can be chosen instead of using a pseudo-random number generator, as calling a PRNG repeatedly can slow things down and affect performance. Let us consider this already-sorted array: arr = [1,2,3,4,5,6,7,8,9]. Let’s also say we don’t want to use a pseudo-random number generator either, so, to be quick, we decide to just pick the last item in the unsorted partition of our array. Here, arr[-1] is 9. No right-side partition winds up being made; the entire rest of the array becomes the left side. That means on the second pass, our pivot is arr[-2] = 8, and we continue. In fact, for an array of length n, we make n-1 passes over it, performing n-1 comparisons, then n-2, and so on down to the last item. This reveals that this implementation works much the same as Bubble Sort, with an actual comparison count of n(n-1)/2, lending us the O(n^2) complexity as the size of the input grows. Of course, this happens with already sorted, or mostly sorted, lists when the first or last item is consistently selected. So, Quick Sort should not use this pivot-selection scheme whenever a list may be passed to it in an already sorted state.
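To see that n(n-1)/2 figure concretely, here is a small counting sketch. It uses the common Lomuto partition with a last-item pivot (a simpler partitioning scheme than the one walked through above, but with the same worst-case behavior on sorted input):

def count_comparisons_last_pivot(arr):
    # Count item-to-pivot comparisons when the last item of every
    # partition is always chosen as the pivot.
    comparisons = 0

    def sort(lo, hi):
        nonlocal comparisons
        if lo >= hi:
            return
        pivot = arr[hi]
        i = lo
        for j in range(lo, hi):
            comparisons += 1
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]
        sort(lo, i - 1)
        sort(i + 1, hi)

    sort(0, len(arr) - 1)
    return comparisons

print(count_comparisons_last_pivot(list(range(100))))  # 4950, i.e. 100*99/2

On an already-sorted 100-item array, every pass peels off a single element, exactly as described above.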

Knowing this, we can rule out always picking just the first or last item as an ideal pivot selection method. There are a few alternatives that can be implemented. With a randomly selected pivot, the odds of repeatedly selecting items in an order that causes O(n^2) behavior become exponentially less likely with every consecutive item, so selecting an item completely at random can be an effective method. However, creating a “random” number with a PRNG can be computationally expensive and slow, which can cause its own set of performance issues when it has to run many times, as with huge lists.

Optimization and Scalability

In order to run the algorithm at maximum efficiency, the goal should be to create the most balanced left and right partitions possible. The behavior that causes performance to degrade to O(n^2) arises when the partitions are as unbalanced as possible, with all of the elements partitioned onto one side. Its best-case performance, O(n*log(n)), arises when the partitions are the most balanced. Therefore, to create the most efficient implementation of the algorithm, we know the following from our analysis:

  1. We should not always select the first item, as that can cause O(n^2) runtime.
  2. We should not always select the last item, as that can cause O(n^2) runtime.
  3. We should not use a pseudo-random number generator, because they are slow and will cause their own performance issues.
  4. We should end each partitioning with the most balanced partitions we can reasonably expect for our best performance.

The trick here is to figure out how to select a pivot from a partition that will leave you with two relatively balanced partitions. At first, it may seem to make sense to just pick the item in the middle position of the array. However, that position can happen to hold one of the largest or smallest values, leaving you with heavily unbalanced partitions even though the ones you started with were evenly partitioned. While this is not likely to happen every time, it’s still not going to lead to the best, consistent performance.

There is one method, called the Median of Three, which gives rise to reasonably balanced lists. This method requires you to pick the first item, the last item, and the middle item. These 3 items then need to be sorted (and since there are only 3 items, something simple like Bubble Sort can be used without worrying about performance). By taking the first, the last, and the middle item, we get a sample of the range of values we are working with. From this sorted set of 3 items, we select the median item, knowing that the larger and smaller items would create more unbalanced partitions. Thus, the median of three helps you create well-balanced partitions.

Let’s look again at our first list: [9,7,4,2,3,6,8,5,1]. The first item is 9, the last item is 1, and the middle item is 3. When this set gets sorted, we get [1,3,9]. By selecting the 3, we will create the least unbalanced pair of partitions possible, as the other items are guaranteed to create partitions that are even more unbalanced.
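A compact way to express that selection (a sketch that leans on Python’s built-in sort for the three sampled items, rather than the hand-rolled bubble pass used in the implementation below):

def median_of_three(arr, left, right):
    # Sample the first, middle, and last items of the partition and
    # return the index whose value is the median of the three.
    middle = (left + right) // 2
    candidates = [(arr[left], left), (arr[middle], middle), (arr[right], right)]
    candidates.sort()
    return candidates[1][1]

arr = [9, 7, 4, 2, 3, 6, 8, 5, 1]
print(median_of_three(arr, 0, len(arr) - 1))  # 4, the index of the 3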

If you aren’t actually too worried about how slow the PRNGs in your language of choice run, you could easily opt to just randomly select an item from within the partition you’re working with and use that as your pivot. Sometimes it will create really unbalanced partitions, but on average it will create a reasonably balanced pair of partitions. In the real world, this is often more than sufficient for most use cases.

However, if you find yourself having to scale up, you will want to go back into your codebase and make your sorting algorithms more efficient. Taking out the PRNG and replacing it with a Median of Three implementation can provide optimization in two places: first, the PRNG is slow, and selecting the first, last, and middle items, bubble-sorting them into place, and picking the median is likely to be faster. Second, the Median of Three implementation does not encounter extremely inefficient cases as often as a randomly selected pivot does.

While Quick Sort does decay to O(n^2) in certain cases, its ability to sort in place, unlike Merge Sort, means that it can often run faster than Merge Sort by avoiding all of the allocating and freeing of working space in memory. Allocating, freeing, writing to, and reading from memory take time, and minimizing those operations by sorting in place gives Quick Sort a performance advantage over Merge Sort.

An Example in Python

This example is in Python and sorts in place, using a median-of-three selection scheme. The median-of-three function will only ever be passed 3 numbers, so a bubble sort can easily put the 3 numbers in order and let us select the middle one. We’ll first create a function that gets passed an array of 3 values and returns them sorted.

def bubble(array):
    # Bubble sort for a 3-item list: keep making passes until one
    # completes with no swaps.
    swapped = False
    for i in range(2):
        if array[i] > array[i+1]:
            array[i], array[i+1] = array[i+1], array[i]
            swapped = True
    if not swapped:
        return array
    else:
        return bubble(array)
With this in place, we can start our quicksort() function. It will be passed 3 arguments: the array it’s sorting, the left boundary of the partition being sorted, and the right boundary of the partition being sorted.

def quicksort(arr, left, right):
    # Median-of-three pivot selection for partitions of 3+ items;
    # a 2-item partition just uses its right item as the pivot.
    if len(arr[left:right+1]) > 2:
        middle = (left + right) // 2
        three = bubble([arr[left], arr[middle], arr[right]])
        if three[1] == arr[left]:
            pivot = left
        elif three[1] == arr[right]:
            pivot = right
        else:
            pivot = middle
    else:
        pivot = right
    left_index = left
    right_index = right

If the partition being sorted is more than 2 items long, we’ll pick the first, last, and middle items and take the median value as our pivot. If we have only 2, we’ll just pick the right item to be the pivot. The left_index and right_index are set to the left and right variables to keep track of the items actually being compared, whereas left and right themselves keep track of the bounds of the partition being worked on.

Now, onto the main event loop:

    while left_index <= right_index:
        # Never compare the pivot against itself; skip over it.
        if left_index == pivot:
            left_index += 1
        if right_index == pivot:
            right_index -= 1
        if arr[left_index] > arr[pivot]:
            if arr[right_index] < arr[pivot]:
                # Both items are on the wrong side: swap them.
                arr[left_index], arr[right_index] = arr[right_index], arr[left_index]
                left_index += 1
                right_index -= 1
            else:
                right_index -= 1
        else:
            left_index += 1
The while loop’s condition keeps it running as long as the left index hasn’t passed the right index. If the left_index makes it to the right boundary of the partition without finding anything that belongs to the right of the pivot, the loop stops. In that scenario, the left_index and right_index would both be stopped at the right end of the partition.

It starts out by checking whether the left_index or right_index is on the pivot. If either is, it moves that index in the appropriate direction to skip over the pivot. With the indices guaranteed to be in a proper place, the comparisons to the pivot can begin. The item at the left_index is compared to the pivot. If it’s smaller, the left_index gets incremented and the comparison repeats. If it’s larger, then we start looking for an item pointed to by the right_index that is smaller than the pivot. If the item at the right_index isn’t smaller, the right_index gets decremented and the comparisons continue. When two swappable items are identified, they get swapped, and both the left_index and right_index move, because both of those items are now in place and do not need to be compared to the pivot again.

Once the left_index has passed the right_index, or run to the end of the partition, it’s time to put the pivot into place:

    if left_index < pivot:
        arr[left_index], arr[pivot] = arr[pivot], arr[left_index]
        pivot = left_index
    elif right_index > pivot:
        arr[right_index], arr[pivot] = arr[pivot], arr[right_index]
        pivot = right_index
Everything to the left of the left_index belongs left of the pivot. Likewise, everything to the right of the right_index belongs to the right of it. The only exception is now the pivot itself. Because the left_index ends up ahead of the right_index when the loop exits, the pivot should only be swapped with the left_index if the left_index is still to the left of the pivot, as the item there needs to end up on the right of the pivot. If the left_index has passed the pivot and is now on its right, then swapping the left_index with the pivot would leave an item larger than the pivot on its left. Instead, the pivot is switched with the right_index, which points at the last of the items that should be on the left of the pivot. After performing the swap, it updates the index of the pivot.

And finally, we wrap up with this:

    # Recurse into any partition that still has at least two items.
    if len(arr[left:pivot]) > 1:
        quicksort(arr, left, pivot-1)
    if len(arr[pivot+1:right+1]) > 1:  # right+1 so the slice covers the whole right partition
        quicksort(arr, pivot+1, right)
    return arr
The new left partition is arr[left:pivot], and the new right partition is arr[pivot+1:right+1]. If there is only one item in either of these, we know that item is already in its proper place. However, if there are 2 or more, those items still need to be evaluated and sorted into place. The quicksort() function can then be called again, with different left and right boundaries for the partitions, recursively, until the entire list is sorted.

Our entire quicksort.py file looks like this:

def bubble(array):
    # Bubble sort for the three sampled items; recurses until a
    # pass completes with no swaps.
    swapped = False
    for i in range(2):
        if array[i] > array[i+1]:
            array[i], array[i+1] = array[i+1], array[i]
            swapped = True
    if not swapped:
        return array
    else:
        return bubble(array)

def quicksort(arr, left, right):
    # Median-of-three pivot selection for partitions of 3+ items;
    # a 2-item partition just uses its right item as the pivot.
    if len(arr[left:right+1]) > 2:
        middle = (left + right) // 2
        three = bubble([arr[left], arr[middle], arr[right]])
        if three[1] == arr[left]:
            pivot = left
        elif three[1] == arr[right]:
            pivot = right
        else:
            pivot = middle
    else:
        pivot = right
    left_index = left
    right_index = right
    # Partition: walk the indices toward each other, swapping
    # misplaced pairs and skipping over the pivot itself.
    while left_index <= right_index:
        if left_index == pivot:
            left_index += 1
        if right_index == pivot:
            right_index -= 1
        if arr[left_index] > arr[pivot]:
            if arr[right_index] < arr[pivot]:
                arr[left_index], arr[right_index] = arr[right_index], arr[left_index]
                left_index += 1
                right_index -= 1
            else:
                right_index -= 1
        else:
            left_index += 1
    # Put the pivot into its final position.
    if left_index < pivot:
        arr[left_index], arr[pivot] = arr[pivot], arr[left_index]
        pivot = left_index
    elif right_index > pivot:
        arr[right_index], arr[pivot] = arr[pivot], arr[right_index]
        pivot = right_index
    # Recurse into any partition that still has at least two items.
    if len(arr[left:pivot]) > 1:
        quicksort(arr, left, pivot-1)
    if len(arr[pivot+1:right+1]) > 1:  # right+1 so the slice covers the whole right partition
        quicksort(arr, pivot+1, right)
    return arr
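As a quick sanity check (a usage sketch, assuming the file above has been loaded), the array from the start of the article comes back sorted:

arr = [9, 7, 4, 2, 3, 6, 8, 5, 1]
print(quicksort(arr, 0, len(arr) - 1))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]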

Testing Speed

Now that we’ve gone over Quick Sort and Merge Sort, two different algorithms that typically perform in O(n*log(n)) time, we can write a test that allows us to directly compare the two’s performance and analyze it for ourselves. In this example, I’ll be importing the timeit module to keep track of performance.

The test will be simple: I want to create a test array full of random numbers for each test. When the array is ready, I’ll use timeit to capture the current time, then run the sorting algorithm. When it’s done, I’ll use timeit again to capture the ending time and calculate the runtime. These runtimes will be kept in their own array, and a thousand tests can be performed. With all of this data, we can find the highest runtime, the lowest runtime, and the average. If we do this the same way with Quick Sort and Merge Sort, we can build an apples-to-apples comparison of performance.

import random
import timeit

def time_test():
    times = []
    for i in range(1000):
        # Build a fresh array of 1,000 random values for each run.
        test_arr = []
        for j in range(1000):
            test_arr.append(random.randint(1, 15000))
        start = timeit.default_timer()
        quicksort(test_arr, 0, len(test_arr) - 1)
        stop = timeit.default_timer()
        exec_time = stop - start
        times.append(exec_time)
    quicksort(times, 0, len(times) - 1)  # sort the collected runtimes
    average = sum(times) / len(times)
    print("Lowest exec time: %s" % min(times))
    print("Highest exec time: %s" % max(times))
    print("Mean exec time: %s" % average)

This is the timing test I wrote for Quick Sort. I essentially wrote the same thing for Merge Sort as well, just with a few tweaks. After running the two tests, the results showed Quick Sort ahead of Merge Sort, performing nearly twice as fast.

With this function to test performance, we can also explore how the different pivot selection methods affect performance. First, we’ll see how Median of 3, Random, and Right-Side pivot selection stack up on a fully randomized array.

Apparently, calculating the median of 3 is not the most efficient way of pivot selection. Its fastest runtime was very close to the random selection’s fastest, but its highest and mean exec times were the highest of the three. Randomly selected pivots had the fastest worst-case performance. Always picking the rightmost item of the partition as the pivot resulted in a much more efficient worst-case performance than the median-of-3, though not quite as fast as random selection. However, its lowest runtime was by far the fastest, taking only 2/3 of the time of the other two, and its average runtime was much faster as well.

Clearly, if the array is going to be completely unsorted, then right-side pivot selection is the winner. It doesn’t have to bother randomly generating numbers or sorting items to select the median of 3, and is faster as a result. The completely random nature of the array means the function will tend to create fairly balanced partitions anyway. But what if the array is already mostly sorted, or completely sorted?

To test this, the test function was slightly modified again to randomly generate an array, sort it first, and then pass the sorted array to the function. The performance change was quite stark.

Right-side pivot selection took 30 times longer on average than the Median of 3! The right-side pivot selection works far faster than the other methods but only while it’s passed a truly random, unsorted array. The advantage quickly disappears and becomes a severe performance penalty if it’s passed an array that’s mostly or completely already sorted.

While a randomly selected pivot behaves almost exactly the same way on completely random and completely sorted lists, the Median of 3 pivot selection is the clear winner when working with arrays that may already be somewhat sorted. In fact, the Median of 3 selection scheme took approximately 2/3 of the time of the random selection scheme in all categories. This is because the Median of 3, in this case, always creates perfectly balanced partitions, whereas random selection makes reasonably balanced, yet still unbalanced, partitions.

With all of this in front of us, it’s easy to see why the Quick Sort is a favorite among software engineers. It’s reliably fast and efficient, and can be implemented with minimal space requirements. If you enjoyed this or found my examination of this algorithm useful, you can show your appreciation by sharing this or buying me a coffee (although now, I’ll probably be looking for toilet paper instead of coffee). And if you’ve actually read this far, thank you for reading, I hope you enjoyed it.

O’Reilly pulls the plug on in-person events

In the wake of the COVID-19 virus pandemic, prominent technology conference producer O’Reilly has shut down its events business, permanently. From now on, O’Reilly events will be held online.

The producer of events such as OSCON (O’Reilly Open Source Software Conference) and the Strata Data & AI conference, O’Reilly noted in a March 24 bulletin the impact of the virus on its in-person events division. In response, the company recently switched its Strata conference, which was to be held in San Jose last week, to an online format, drawing more than 4,600 remote attendees.

“Without understanding when this global health emergency may come to an end, we can’t plan for or execute on a business that will be forever changed as a result of this crisis,” said Laurie Baldwin, O’Reilly president. “With large technology vendors moving their events completely on-line, we believe the stage is set for a new normal moving forward when it comes to in-person events.”

Baldwin noted that large technology vendors have moved events online as well. Microsoft, for one, is moving its Microsoft Build 2020 developer conference, originally planned for Seattle in May, to be all-digital.

O’Reilly employees who had been involved in the in-person events business have been let go. In addition to the events business, O’Reilly has a technology publishing business and provides interactive coding events and custom training.


ROLLING UPDATE: The impact of COVID-19 on public networks and security

As the coronavirus spreads, public and private companies as well as government entities are requiring employees to work from home, putting unforeseen strain on all manner of networking technologies and causing bandwidth and security concerns.  What follows is a round-up of news and traffic updates that Network World will update as needed to help keep up with the ever-changing situation.  Check back frequently!

UPDATE 3.27

  • Broadband watchers at BroadbandNow say users in most of the cities it analyzed are experiencing normal network conditions, suggesting that ISPs (and their networks) are holding up to the shifting demand. In a March 25 post the firm wrote: “Encouragingly, many of the areas hit hardest by the spread of the coronavirus are holding up to increased network demand. Cities like Los Angeles, Chicago, Brooklyn, and San Francisco have all experienced little or no disruption. New York City, now the epicenter of the virus in the U.S., has seen a 24% dip out of its previous ten-week range. However, with a new median speed of nearly 52 Mbps, home connections still appear to be holding up overall.”

Other BroadbandNow findings included:

-Eighty-eight (44%) of the 200 cities analyzed have experienced some degree of network degradation over the past week compared to the 10 weeks prior. However, only 27 cities (13.5%) are experiencing dips of 20% below range or greater.

-Seattle download speeds have continued to hold up over the past week, while New York City’s speeds have fallen out of range by 24%. Both cities are currently heavily affected by the coronavirus pandemic.

-Three cities – Austin, Texas; Winston-Salem, North Carolina; and Oxnard, California – have experienced significant degradations, falling out of their ten-week range by more than 40%.

  • Cisco’s Talos threat intelligence arm wrote on March 26 about the COVID security threat, noting three broad categories of attacks leveraging COVID, with known APT participation in each: malware and phishing campaigns using COVID-themed lures; attacks against organizations that carry out research and work related to COVID; and fraud and disinformation. From an enterprise security perspective, Talos offered a few recommendations:

-Remote access: Do not expose Remote Desktop Protocol (RDP) to the internet. Use secure VPN connections with multi-factor authentication schemes.  NAC packages can be used to ensure that systems attempting to remotely connect to the corporate environment meet a minimum set of security standards such as anti-malware protection, patch levels, etc. prior to granting them access to corporate resources. Continually identify and remediate access policy violations.

-Identity Management: Protect critical and public-facing applications with multi-factor authentication and supporting corporate policies. Verify that remote account and access termination capabilities work as intended in a remote environment.

-Endpoint Control: Because many people may be working from home networks, endpoint visibility, protection, and mitigation is now more important than ever. Consider whether remediation and reimaging capabilities will work as intended in a remote environment. Encrypt devices where possible, and add this check to your NAC solution as a gate for connectivity. Another simple method of protecting endpoints is via DNS, such as with [Cisco’s] Umbrella, by blocking the resolution of malicious domains before the host has a chance to make a connection.

  • In an FAQ about the impact of COVID-19 about fulfilling customer hardware orders, VMware stated: “Some VMware SD-WAN hardware appliances are on backorder as a result of supply chain issues. As a result, we are extending the option to update existing orders with different appliances where inventory is more readily available. Customers may contact special email hotline with questions related to backordered appliances. Please send an email to [email protected] with your questions and include the order number, urgent quantities, and contact information. We will do our best to respond within 48 hours.”
  • Cisco said it has been analyzing traffic statistics with major carriers across Asia, Europe, and the Americas, and that its data shows the most congested point in the network typically occurs at inter-provider peering points, Jonathan Davidson, senior vice president and general manager of Cisco’s Mass-Scale Infrastructure Group, wrote in a blog on March 26. “However, the traffic exchanged at these bottlenecks is only a part of the total internet traffic, meaning reports on traffic may be higher overall as private peering and local destinations also contribute to more traffic growth.”

“Our analysis at these locations shows an increase in traffic of 10% to 33% over normal levels. In every country, traffic spiked with the decision to shut down non-essential businesses and keep people at home. Since then, traffic has remained stable or has experienced a slight uptick over the days that followed,” Davidson stated.

“Typically, the busiest time on the network occurs between 6pm and 10pm, that’s when people are home watching streaming video. Although traffic during these hours has increased slightly (with some variance by carrier) it’s not the primary driver for the overall increase,” Davidson stated.   “As more of us use the internet for work and school our traditional busy hour has changed, starting earlier and lasting longer (e.g. 9am to 10pm). Although this new traffic load between 9am – 6pm is considerable, it’s still below evening peak hours. Service providers are certainly paying attention to these changes, but they are not yet a dire concern, as most networks are designed for growth. Current capacities are utilized more over the course of the entire day.”

  • Spanish multinational telecommunications company Telefónica said traffic through its IP networks has increased by nearly 40%, while mobile voice use has increased by about 50% and mobile data by about 25%. Likewise, traffic from instant messaging tools such as WhatsApp has increased fivefold in recent days.

UPDATE: 3.26

  • Week over week (ending March 23) Ookla says it has started to see a degradation of mobile and fixed-broadband performance worldwide. More detail on specific locations is available below. Comparing the week of March 16 to the week of March 9, mean download speed over mobile and fixed broadband decreased in Canada and the U.S. while both remained relatively flat in Mexico.
  • What is the impact of the coronavirus on corporate network planning? It depends on how long the work-from-home mandate goes on, really. Tom Nolle, president of CIMI Corp., takes an interesting look at the situation, saying the shutdown “could eventually produce a major uptick for SD-WAN services, particularly in [managed service provider]… Businesses would be much more likely to embark on an SD-WAN VPN adventure that didn’t involve purchase/licensing, favoring a service approach in general, and in particular one with a fairly short contract period.”
  • Statistics from VPN provider NordVPN show the growth of VPN usage across the globe. For example, the company said the US has experienced 65.93% growth in the use of business VPNs since March 11. It reported that mass remote working has contributed to a rise in desktop (94.09%) and mobile app (0.39%) usage among Americans. Globally, NordVPN Teams has seen a 165% spike in the use of business VPNs, and business VPN usage in the Netherlands (240.49%), Canada (206.29%) and Austria (207.86%) has skyrocketed beyond 200%. Italy has had the most modest growth in business VPN usage at just 10.57%.

UPDATE: 3. 25:

  • According to Atlas VPN user data, VPN usage in the US increased by 124% during the last two weeks. VPN usage in the country increased by 71% between March 16 and 22 alone. Atlas said it measured how much traffic traveled through its servers during that period compared to March 9 to 15. The data came from the company’s 53,000 weekly users.
  • Verizon reports that voice usage, long declining in the age of texting, chat and social media, is up 25% in the last week. The network report shows the primary driver is accessing conference calls. In addition, people are talking longer on mobile devices with wireless voice usage notching a 10% increase and calls lasting 15% longer. 
  • AT&T also reported increased calling, especially Wi-Fi calling, up 88% on March 22 versus a normal Sunday. It says that consumer home voice calls were up 74% more than an average Sunday; traffic from Netflix dipped after all-time highs on Friday and Saturday; and data traffic due to heavy video streaming between its network and peered networks tied record highs. AT&T said it has deployed portable cell nodes to bolster coverage supporting FirstNet customers in Indiana, Connecticut, New Jersey, California and New York.
  • Microsoft this week advised users of Office 365 it was throttling back some services:
    • OneNote: OneNote in Teams will be read-only for commercial tenants, excluding EDU. Users can go to OneNote for the web for editing. Download size and sync frequency of file attachments has been changed. You can find details on these and other OneNote-related updates at http://aka.ms/notesupdates.
    • SharePoint: We are rescheduling specific backend operations to regional evening and weekend business hours. Impacted capabilities include migration, DLP and delays in file management after uploading a new file, video or image. Reduced video resolution for playback videos.
    • Stream: People timeline has been disabled for newly uploaded videos. Pre-existing videos will not be impacted. Meeting recording video resolution adjusted to 720p.

RELATED COVID-19 NEWS:

  • Security vendor Check Point’s Threat Intelligence unit says that since January 2020, more than 4,000 coronavirus-related domains have been registered globally. Of these websites, 3% were found to be malicious and an additional 5% are suspicious. Coronavirus-related domains are 50% more likely to be malicious than other domains registered in the same period, and also more likely than domains tied to recent seasonal themes such as Valentine’s Day.
  • Orange, an IT and communications services company, said it has increased its network capacity and upgraded its service platforms. These measures allow it to support the ongoing exponential increase in needs and uses. The number of users connecting to their company’s network remotely has already increased by 700% among its customers. It has also doubled the capacity for simultaneous connections on its platforms. The use of remote collaboration solutions such as video conferencing has also risen massively, with usage increasing by between 20% and 100%.
  • Verizon said it has seen a 34% increase in VPN traffic from March 10 to 17. It has also seen a 75% increase in gaming traffic and web traffic increased by just under 20% in that time period according to Verizon.
  • One week after the CDC declaration of the virus as a pandemic, data analytics and broadband vendor OpenVault wrote on March 19 that:
    • Subscribers’ average usage during the 9 am-to-5 pm daypart has risen to 6.3 GB, 41.4% higher than the January figure of 4.4 GB. 
    • During the same period, peak hours (6 pm–11 pm) usage has risen 17.2% from 5.0 GB per subscriber in January to 5.87 GB in March. 
    • Overall daily usage has grown from 12.19 GB to 15.46 GB, an increase of 26.8%.
    • Based on the current rate of growth, OpenVault projected that consumption for March will reach nearly 400 GB per subscriber, an increase of almost 11% over the previous monthly record of 361 GB, established in January of this year. In addition, OpenVault projects a new coronavirus-influenced run rate of 460 GB per subscriber per month going forward.
Join the Network World communities on Facebook and LinkedIn to comment on topics that are top of mind.
No tags for this post.

Related posts

NEWS UPDATE: The impact of COVID-19 on public networks and security

As the coronavirus spreads, public and private companies as well as government entities are requiring employees to work from home, putting unforeseen strain on all manner of networking technologies and causing bandwidth and security concerns.  What follows is a round-up of news and traffic updates that Network World will update as needed to help keep up with the ever-changing situation.  Check back frequently!

UPDATE: 3.26

  • Week over week (ending March 23) Ookla says it has started to see a degradation of mobile and fixed-broadband performance worldwide. More detail on specific locations is available below. Comparing the week of March 16 to the week of March 9, mean download speed over mobile and fixed broadband decreased in Canada and the U.S. while both remained relatively flat in Mexico.
  • What is the impact of the coronavirus on corporate network planning? Depends on how long the work-from-home mandate goes on really. Tom Nolle, president of CIMI Corp. takes an interesting look at the situation saying the shutdown “could eventually produce a major uptick for SD-WAN services, particularly in [managed service provider]    Businesses would be much more likely to embark on an SD-WAN VPN adventure that didn’t involve purchase/licensing, favoring a service approach in general, and in particular one with a fairly short contract period.”
  • Statistics from VPN provider NordVPN show the growth of VPN usage across the globe.  For example, the company said the US has experienced a 65.93% growth in the use of business VPNs since March 11. It reported that mass remote working has contributed towards a rise in desktop (94.09%) and mobile app (0.39%) usage among Americans. Globally, NordVPN teams has seen a 165% spike in the use of business VPNs and business VPN usage in Netherlands (240.49%), Canada (206.29%) and Austria (207.86%) has skyrocketed beyond 200%. Italy has had the most modest growth in business VPN usage at just 10.57%.

UPDATE: 3. 25:

  • According to Atlas VPN user data, VPN usage in the US increased by 124% during the last two weeks. VPN usage in the country increased by 71% between March 16 and 22 alone. Atlas said it measured how much traffic traveled through its servers during that period compared to March 9 to 15. The data came from the company’s 53,000 weekly users.
  • Verizon reports that voice usage, long declining in the age of texting, chat and social media, is up 25% in the last week. The network report shows the primary driver is accessing conference calls. In addition, people are talking longer on mobile devices with wireless voice usage notching a 10% increase and calls lasting 15% longer. 
  • AT&T also reported increased calling, especially Wi-Fi calling, up 88% on March 22 versus a normal Sunday. It says that consumer home voice calls were up 74% more than an average Sunday; traffic from Netflix dipped after all-time highs on Friday and Saturday; and data traffic due to heavy video streaming between its network and peered networks tied record highs. AT&T said it has deployed portable cell nodes to bolster coverage supporting FirstNet customers in Indiana, Connecticut, New Jersey, California and New York.
  • Microsoft this week advised users of Office 365 it was throttling back some services:
    • OneNote: OneNote in Teams will be read-only for commercial tenants, excluding EDU. Users can go to OneNote for the web for editing. The download size and sync frequency of file attachments have been changed. You can find details on these and other OneNote-related updates at http://aka.ms/notesupdates.
    • SharePoint: We are rescheduling specific backend operations to regional evening and weekend business hours. Impacted capabilities include migration, DLP, and delays in file management after uploading a new file, video, or image. Video resolution for playback has also been reduced.
    • Stream: People timeline has been disabled for newly uploaded videos. Pre-existing videos will not be impacted. Meeting recording video resolution adjusted to 720p.

RELATED COVID-19 NEWS:

  • Security vendor Check Point’s Threat Intelligence unit says that since January 2020, more than 4,000 coronavirus-related domains have been registered globally. Of these websites, 3% were found to be malicious and an additional 5% are suspicious. Coronavirus-related domains are 50% more likely to be malicious than other domains registered in the same period, and also more likely than domains tied to recent seasonal themes such as Valentine’s Day.
  • Orange, an IT and communications services company, said it has increased its network capacity and upgraded its service platforms. These measures allow it to support the ongoing exponential increase in needs and usage. The number of users connecting to their company’s network remotely has already increased by 700% among its customers, and it has doubled the capacity for simultaneous connections on its platforms. The use of remote collaboration solutions such as video conferencing has also risen massively, with usage increasing by between 20% and 100%.
  • Verizon said it has seen a 34% increase in VPN traffic from March 10 to 17. Over the same period, it saw a 75% increase in gaming traffic, while web traffic increased by just under 20%.
  • One week after the WHO declared the virus a pandemic, data analytics and broadband vendor OpenVault wrote on March 19 that:
    • Subscribers’ average usage during the 9 am-to-5 pm daypart has risen to 6.3 GB, 41.4% higher than the January figure of 4.4 GB. 
    • During the same period, peak hours (6 pm–11 pm) usage has risen 17.2% from 5.0 GB per subscriber in January to 5.87 GB in March. 
    • Overall daily usage has grown from 12.19 GB to 15.46 GB, an increase of 26.8%.
    • Based on the current rate of growth, OpenVault projected that consumption for March will reach nearly 400 GB per subscriber, an increase of almost 11% over the previous monthly record of 361 GB, established in January of this year. In addition, OpenVault projects a new coronavirus-influenced run rate of 460 GB per subscriber per month going forward.
No tags for this post.

Related posts

BrandPost: Edge Computing: 5 Design Considerations for Storage

The arrival of 5G is expected to bring an unprecedented level of network capability and lightning-fast data transfer rates. This will set the stage for even more advanced and novel applications, enabling everything to be more connected, in real time, all the time.

It’s not enough to just capture data; you must be able to transfer data at high speeds to unlock the valuable insights that data provides. From the data center to the edge, 5G and high-speed flash storage are enabling emerging IoT use cases from autonomous vehicles to smart cities and the supply chains of the future. When designing storage to support IoT at the edge, you must consider how 5G and your storage choice will impact data center architectures.

By 2022, there will be 422 million 5G connections globally and 77.5 EB (exabytes) of mobile data traffic per month, equivalent to 930 EB a year [1]. These increases will require changes to edge and core architectures to support this tidal wave of new applications, services and, most importantly, data.

Challenges: Complexity and Speed of Data

Today’s data challenges are heterogeneous. Data is scattered and unstructured across mixed storage and computing environments: endpoints, edge, on-premises, cloud, or a hybrid that uses a mix of these. Data is also accessed across different architectures, including file-based, database, object, and containers. There are also issues with duplicated and conflicting data.

5G will surely add more complexity to today’s existing challenges. With 5G, even more data will be generated from endpoints and IoT devices, with more metadata and contextual data produced and consumed. As a result, there will be more demand for real-time processing and more edge compute processing, analyzing, and data storage scattered throughout the network.

What is Your Data Strategy?

Each application and use case is unique, with different storage requirements and challenges, including performance, data integrity, workloads, data retention, and environmental restrictions. In the past, the capabilities of general-purpose storage greatly exceeded the requirements of networks, data, and applications. Now, with the proliferation of endpoints, edge computing, and cloud computing, storage has to meet advanced use cases and environmental demands that general-purpose storage is not suited for. With the move to 5G, companies will need to rethink which data they want to capture, process, and keep across endpoints, edge compute, and cloud.

5 Edge Computing Design Considerations: Storage

Today and in the new 5G era, storage has to anticipate and meet the conditions and expectations of various use cases, workloads, and environments. To create an environment for data to thrive, there are five key edge design considerations for storage:

  1. Environmental: In what kind of environment will the data be captured and kept? The most critical environmental conditions that can affect storage performance are altitude, temperature, humidity, and vibration. For example, a smart car outside in the desert heat or during a snowstorm will need to withstand extreme temperatures. Sensors in the mountains or on a high-speed train in Japan will need to be resistant to pressure and movement.
  2. Endurance and Workload: How many times can you write to the storage? Is your application write-intensive, such as video recording for surveillance, or read-intensive, such as map navigation and/or music from the car infotainment system? Is your equipment in a hard-to-reach place, such as a video surveillance camera at the top of a building, or behind the secured doors of a bank vault? In these scenarios, a high-endurance storage solution will help limit the frequency of maintenance and replacement.
  3. Data Retention: How long does the data need to be stored? What do you want to process, analyze, and save at the endpoints, at the edge, and in the cloud? For example, a corporate database may require electronic document storage for five years or longer due to governance specifications. Specific data may also be retained for future analytics. The storage solution needs to meet the data-retention policy and capacities required for various applications/use cases and regulations.
  4. Monitoring: How is the data monitored? Who has access to the collected data? How good is the data? With rapid increases in the number of connected devices and edge compute deployments, and the complexity of data being generated, people want to have access to the data at all times. The ability to monitor both the health of the storage device as well as the health of the data is becoming more important to users in order to ensure data integrity and cost management.
  5. Security: How will the data be protected? Typically, data is secured on the host side (CPU), but hosts can be susceptible to tampering. Customers also want data to be protected on the storage device itself, through encryption while the data is at rest.

Defining Edge Computing Design from the Endpoints

5G is going to be fast, and it will bring new, extreme use cases. We need to think differently about edge computing, with architectures optimized around the right storage for the right application. Without a clearly defined data storage strategy that considers both user and application needs from the endpoints through the edge and the cloud, 5G and future environments will not reach their full potential.

Forward-Looking Statements:

This article may contain forward-looking statements, including statements relating to the market for Western Digital’s products and the future capabilities and technologies enabling that market. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.

[1] Cisco Virtual Networking Index (VNI) Mobile Forecast Highlights Tool

No tags for this post.

Related posts

Here’s How Google and Friends Are Contributing—How Can We All Help?

A month ago it would’ve been hard to imagine how much life would change, seemingly overnight. Quarantines have been declared in many countries, around one billion people are staying at home and isolating themselves, and many are working remotely. Public activities have stopped, drug stores have run out of protective masks, and food shops are being stripped of even basic products.

It sure looks grim but, believe it or not, things are not as bad as they sound. It’s tough, and we need to fight the virus together: practice social distancing, isolate at home, and wear face protection, among other things. But there’s a brighter side we can all turn to: various companies are generously going the extra mile to help those in need.

There are good examples, like Amazon, which has decided to hire an additional 100,000 workers and increase the salaries of existing staff to deal with the crisis. Dolce & Gabbana has teamed up with Humanitas University to fund a coronavirus research project, and Giorgio Armani has donated nearly 1.5 million to four Italian hospitals and civil protection agencies. It looks like private businesses of all kinds are pulling together to help fight the virus.

In this article, I’d like to look at a few tech companies, big and small, that are also helping in any way they can. Not only might this brighten a pretty gloomy view, it may also serve as an example for other tech players searching for ideas on how to contribute to the cause.

Tech companies that are helping us survive Covid-19

1. Google. First of all, Google has set up a COVID-19 paid-sick-leave fund that allows any worker who shows symptoms of the virus, or who can’t continue working due to quarantine, to receive compensation for sick time off. Moreover, it has made Google Hangouts, an excellent video conferencing service, available for free to G Suite users, which really helps for communicating with friends or working remotely. And thanks to Google for that: I’ve already used it, and it’s very convenient.
2. Microsoft. This tech giant has announced that it will continue paying hourly workers who support its campuses, regardless of how much work there is. Even though some services are no longer needed as much as before, Microsoft will keep supporting these workers no matter how many hours they’re now required to work; they will receive their full pay.
3. NordVPN. The use of virtual private networks has boomed: with millions of people working from home, the need to reach company intranets and secure home internet connections has increased significantly. The people at NordVPN decided to help the European country that has suffered the worst during this crisis, Italy (blog post in Italian). They will support Italian universities and academic personnel by providing them with NordVPN accounts free for six months.
4. CASETify. These custom tech accessory makers produce everything from phone cases to watch bands, and right now they include ten free sanitizing wipes with every order. They have also released a UV tech sanitizer for phones and will give any profit from it to GlobalGiving’s Coronavirus Relief Fund, a lovely initiative on their side.
5. Tealbook. This is a major data intelligence platform used all across the globe, and it has decided to offer free supplier reports until the end of March. Each report identifies relevant suppliers, picked from 400 million global websites as well as Tealbook’s own supplier profiles, to help businesses manufacture much-needed items like protective masks, hand sanitizer, medicine, and other materials needed to survive the virus.
6. OneDine. This company builds guest-side technology for hospitality businesses and restaurants, and it is giving away its Tap & Pay touchless payment systems to those establishments free of charge, to help keep people from touching shared surfaces and spreading the virus. It will handle the setup, table sensors, and tap-to-pay activation: a genuinely distinctive way to help combat the spread of Covid-19.

Find your way to help

These are just a few great examples of private tech companies taking the initiative to contribute to the global struggle against the coronavirus. Of course, there are more, such as Apple giving retail staff who experience symptoms of the virus unlimited paid sick leave, or the smaller company Meero offering free large-file transfers, which are so helpful for remote work.

In my opinion, taken together, all these examples show one thing: everybody can contribute in this hard situation. Most of the time, even common sense helps: think about what you would need in isolation, think about what society needs to survive this outbreak, and don’t limit yourself.

As you can see, material support such as sanitizers helps, but cybersecurity is essential too; maybe your company can help with remote-work setups?

Online communication and file transfers are all required for a comfortable isolation period, but so is entertainment; boredom is an enemy too. I’d love to see more tech companies joining this noble cause, maybe game developers sharing fun games for free, or audiobook retailers giving away classics to everyone.

It is my firm belief that we all can help somehow, and if you’ve got a great idea, don’t hesitate to leave it in the comments; maybe something will come of it.

No tags for this post.

Related posts

Responsive Web Design: Understand And Apply It Once And For All

After one month as a Microverse student, I realized that my first clone webpages didn’t have a responsive layout. I was using pixels in the navbars, percentages in one section, and rem in another; I had no rule or standard procedure. The goal was simply to make each page look like the original webpage ON MY SCREEN.

As I advanced through the course, I learned that these units made all the difference to how a webpage responds to different screen sizes. My code reviewers would tell me to adjust my webpage because it looked different on their screens.

When I started having these issues, I did a lot of research on the subject, but I was still a little confused. This article’s goal is to give you a simple, direct rule for achieving the responsiveness any developer needs in their code.

Why Make Your Webpage Responsive?

Right now, I am using a Lenovo Ideapad S145 to write this article. It has a 15.6” screen, and the Hackernoon webpage looks like this:

I was reading an article there when Windows Update kicked in (can we control it?), so I had to switch to my phone to continue reading. I have a Motorola G7 Plus with a 6.2” screen, and this is how the same webpage looks on that device:

Did you see the difference? Hackernoon’s webpage is responsive enough to change the position and appearance of each section when the screen size changes. You can see those changes by zooming in and out on your PC while looking at their webpage.

Now, if those changes weren’t made, it would be very difficult for me to read that article on my mobile phone, as I would need to keep scrolling left and right or zooming in to see some of the content. Check out this great article on Medium!

Solutions For Our Problems

There are many ways to solve these problems. In this article, I’m going to focus on two of them:

• Relative units (rem, em, and percentage);
• Media queries.

Relative Units vs Fixed Units

When using CSS properties like font-size, width, height, margin, or any other that needs a size unit, you have several options, and they fall into two groups: relative units like rem, em, and percentage (more info), and fixed units like pixels. The first group is the one we should be using, and the images below show why.

In this image, we have a maximized window showing two boxes: I used pixels for the top one and a percentage for the bottom one. Now, let’s see what happens when I shrink my browser window.

Can you see the scroll bar? That happens because the top box has a fixed unit value, so its width stays the same even when we shrink the window. The bottom box has a percentage value, so it responds to the change in window size.
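
To make this concrete, here is a minimal sketch of the kind of CSS that could be behind a demo like this (the class names, sizes, and colors are my own assumptions, not taken from the original screenshots):

    /* Top box: fixed width in pixels. It stays 900px wide no matter what,
       so a narrower window gets a horizontal scroll bar. */
    .box-fixed {
      width: 900px;
      height: 100px;
      background: tomato;
    }

    /* Bottom box: relative width. It is always 80% of its parent
       (here, the page body), so it shrinks along with the window. */
    .box-relative {
      width: 80%;
      height: 100px;
      background: seagreen;
    }

The same idea applies to typography: font-size: 1rem scales with the root font size the user or browser has chosen, while font-size: 16px stays frozen no matter what.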

This is the first step toward making your webpage more responsive, but we are not quite done yet; we can improve our projects even further. Take a look at the next section!

Media Queries

Now let’s say you have a horizontal navbar on a webpage that is 1024px wide, viewed on a 1440px laptop. When you look at the page, it’s fine: you can read everything written in the navbar, and it’s very easy to navigate:

But, if I switch to my mobile (width: 375px), look at what happens when we don’t use media queries:

The scroll bar is back! Also, did you see what happened to the navbar? That’s bad, isn’t it? I agree! The solution to this problem is using media queries in our CSS code to adjust the layout for different screen sizes. You can take a look at this article from CSS-Tricks about media queries. Now, when we add the media query, the above webpage should look like this:

Terrific: the scroll bar is gone, and so is the horizontal navbar! Instead, the page now uses a menu button (in the top green area, beside the light icon) which, when clicked, shows that same navbar vertically. Isn’t it awesome?
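
As a rough sketch of how this could be done (the 768px breakpoint, the class names, and the idea of toggling an “open” class are my own assumptions; the real page’s code may differ):

    /* Default (wide screens): horizontal navbar, no menu button. */
    .navbar {
      display: flex;
    }
    .menu-button {
      display: none;
    }

    /* Small screens: hide the horizontal navbar and show the menu
       button instead. Clicking the button would toggle the "open"
       class (e.g. with a few lines of JavaScript) to reveal the
       same links stacked vertically. */
    @media (max-width: 768px) {
      .navbar {
        display: none;
      }
      .navbar.open {
        display: flex;
        flex-direction: column;
      }
      .menu-button {
        display: block;
      }
    }

On wide screens the first two rules apply; below 768px the media query takes over and the navbar collapses behind the menu button.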

If you want to check the results of your work easily, you can use Google Chrome’s Developer Tools and open the Device Toolbar, where you can play with the window’s width and see how the changes apply at each media query breakpoint.

Conclusion

In this article, I tried to give you the basic concepts behind some of the tools needed for building a responsive webpage. If you need a more detailed step-by-step guide, I recommend watching this tutorial from TutsPlus. This article shouldn’t be your only source of information: take a look at the links I provided and do some more research before jumping into your project. And never do it like Peter Griffin. Happy coding!
No tags for this post.

Related posts