7 Sneaky Ways Hackers Are Using Machine Learning to Steal Your Data

Machine learning is famous for its ability to analyze large data sets and identify patterns. A subset of artificial intelligence, it uses algorithms that leverage historical data and statistical analysis to make assumptions and predictions about behavior.
The best part: software powered by machine learning algorithms can perform functions it was never explicitly programmed to perform.
Despite its challenges, this makes machine learning an ideal choice for identifying cybersecurity threats and mitigating risk. Microsoft did just that with Windows Defender in 2018. Equipped with multiple layers of machine learning, the software identified and blocked crypto miners before they even started digging. Cyber attackers were trying to install cryptocurrency miners on thousands of computers through Trojan malware, but they failed to achieve their goal thanks to machine learning.
Because of this, machine learning has been adopted extensively by cybersecurity experts; in fact, it is transforming endpoint security by adding accuracy and contextual intelligence. Sadly, cybersecurity professionals are not the only ones benefiting from machine learning. Cyber attackers are also using the technology to develop sophisticated malware and attacks that can bypass and fool security systems.

In this article, you will learn about seven ways in which hackers use machine learning to fulfill their malicious designs.

1. Social Engineering Attacks

Humans are the weakest link in your cybersecurity chain, and cybercriminals are fully aware of that. The increasing trend in social engineering attacks is a testament to it. The main objective of these attacks is to deceive people into giving out sensitive personal and financial information, or to persuade them to take a desired action.

With machine learning, hackers can take it up a notch and collect sensitive data about businesses, their employees, and their partners. What's worse, they don't need much time to do it, because machine learning lets them replicate social engineering attacks at scale.

2. Phishing and Spear Phishing

Cyber attackers are training machine learning algorithms to recreate real-world situations. For instance, hackers are using machine learning to decipher the patterns of automated emails sent by service providers. This lets them create fake messages that look identical to the real ones, making it almost impossible for recipients to tell the difference, so they end up sharing their user IDs and passwords.

The best way to combat this is to increase cybersecurity awareness among your employees. Invest in cybersecurity training programs and test their knowledge by launching mock attacks. This will give you a clear picture of how well your employees hold up against phishing and spear phishing attempts. Well-trained, cybersecurity-aware employees become an asset: they can not only protect themselves from such attacks but also identify and report them before it is too late.

3. Spoofing

Spoofing creates fake personas of companies, big brands, famous personalities, or employees in top positions. By harnessing machine learning algorithms, cyber attackers first analyze the target from different angles so they can pose as, say, the CEO of a company. Next, they start sending malicious emails. It does not end there: cybercriminals also use machine learning to study how the owner of the company writes, publishes social media posts, and sends emails. Once done, they can generate fake text, video, and voice to trick employees into taking the desired action. We have already seen the consequences this can have in several voice fraud incidents.

4. Ransomware and other Malware

Most cybersecurity attacks use malware, even though the malware type might vary; it could be ransomware, spyware, or a Trojan horse. By using machine learning algorithms, cybercrooks are making this malware more complex and sophisticated so it cannot be easily detected and eliminated. We are already seeing malware that can change its behavior so it cannot be identified by protection systems. The key is to keep your anti-malware protection up to date and back up your data.

5. Discovering Loopholes

Hackers are always one step ahead of cybersecurity experts in this race. Do you know why? They are constantly looking for vulnerabilities they can exploit. Once they find a loophole, they capitalize on it and launch an attack, while cybersecurity experts take far longer to patch those vulnerabilities.

Machine learning can widen this gap and dramatically accelerate the process, as it helps hackers identify these loopholes far more quickly. This means they can not only uncover more gaps in less time but also target them. To give you an idea, a bug that previously took hackers days to identify can now be uncovered in minutes, thanks to machine learning.

6. Password and Captcha Violations

Most people still use passwords, and businesses still rely on them to authorize and authenticate users. Even if you follow password best practices and a secure app development process, passwords are not the safest option. Hackers use brute force attacks to guess your passwords, and machine learning helps their cause by speeding up the process and letting them discover passwords far more quickly. Moreover, cybercriminals are also training bots to get past protection barriers such as CAPTCHA codes.

7. DDoS Attacks

With machine learning at their disposal, cyber attackers can automate different elements and phases of an attack. Say a cybercriminal is planning a phishing campaign: they create a phishing email, want to send it to different groups at different times, and machine learning algorithms can handle that scheduling for them. Since the advent of machine learning, we are also seeing hackers use it to launch and control dangerous DDoS attacks that rely on botnets and zombie machines.

What security measures do you take to protect your critical business assets from AI based cyber-security attacks?


How to Prepare Your Site for Heavy Traffic

1. Monitor your infrastructure.
First of all, you should know what's happening with your website. If you're experienced with Prometheus/Grafana, you could use them, but if you're not, it's not a problem; you can use a monitoring service such as Datadog or any other SaaS offering and set it up really quickly. If even that is too much for now, use Pingdom or Site24x7, at least to check that your website is still available.
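
If you just need a bare-bones availability check while you set up proper monitoring, a small Python sketch like the one below can do it. The health-check URL is a placeholder, and this is a stopgap, not a replacement for a real monitoring stack:

# Minimal availability/latency probe - a stopgap until real monitoring is in place.
import time
import urllib.request

URL = "https://example.com/health"  # placeholder - point this at your own site

def probe(url, timeout=5.0):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            latency_ms = (time.monotonic() - start) * 1000
            print(f"{time.strftime('%H:%M:%S')} status={resp.status} latency={latency_ms:.0f}ms")
    except Exception as exc:
        print(f"{time.strftime('%H:%M:%S')} DOWN: {exc}")

if __name__ == "__main__":
    while True:
        probe(URL)
        time.sleep(30)  # poll every 30 seconds

Run it from a machine outside your own infrastructure; otherwise you're only checking that the probe can reach itself.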

Remember, you can control what you want to measure, and the most important thing is that if you don’t know what’s happening inside your system and exactly where it’s happening, you can’t fix it.

There are multiple possibilities of what could go wrong when you get hit by traffic:

1. You’re bound by CPU resources
2. You’re bound by RAM limits
3. You’re bound by your HDD/storage performance
4. You’re bound by the bandwidth on your cloud instance/cluster/server

2. Prepare to scale at 60-80% of maximum load. Whenever you see that you’ve reached 80% of your resource limits, you should start scaling. When you reach 100%, you’ll be down, and it will take time to recover (not to mention it will be very stressful). You should act fast, because you’ll be losing your users, and you might make more mistakes when you’re in a hurry. When you reach 80% of your load, scale until you get it down to 40%, then repeat as necessary.
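
As a rough illustration of that rule of thumb, here is a small Python sketch. The 80%/40% thresholds come straight from the advice above; everything else is assumed:

import math

def instances_needed(current_instances, utilization, scale_at=0.8, target=0.4):
    """Return how many instances bring per-instance utilization near the target."""
    if utilization < scale_at:
        return current_instances
    total_load = current_instances * utilization  # load expressed in "instance units"
    return math.ceil(total_load / target)

# 4 instances at 85% utilization -> 9 instances, i.e. roughly 38% each.
print(instances_needed(4, 0.85))

The exact numbers matter less than the habit: decide your thresholds before the spike, not during it.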

3. Keep an eye on HDD performance and bandwidth limits, not only CPU and RAM. It’s harder to discover the problem when your performance is hit by IOPS (input/output operations per second) or net bandwidth limits.

4. Watch your database performance, especially when you’re using a cloud database.
RDS, Cloud SQL, MongoDB Atlas, and similar services are managed by the cloud provider, but they still have their own limits; watch them and scale when necessary.

5. When your DB hits a CPU limit, check for indexes; that might really help.
Adding indexes dramatically reduces CPU load. Say you’re using 90% of your DB CPU. You might want to scale the server 2x CPU to handle 2x load, but if most of your queries are unindexed, adding indexes might reduce your CPU load by 10x, so it’s worth investigating.
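
The sketch below uses SQLite purely for illustration (the table and query are made up), but the same explain-before-and-after exercise works on RDS, Cloud SQL, or any other managed database:

# Toy demonstration: how an index changes the query plan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (user_id, total) VALUES (?, ?)",
                 [(i % 1000, i * 1.5) for i in range(100_000)])

query = "SELECT SUM(total) FROM orders WHERE user_id = 42"

# Without an index: the planner scans the whole table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

conn.execute("CREATE INDEX idx_orders_user_id ON orders (user_id)")

# With the index: the planner switches to an index search.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())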

6. Keep an eye on your cloud bills.
It's easy to forget about your bills when you're in a rush, so set up budget alerts in your billing system. Bandwidth is especially pricey: unless you're able to move your content to a CDN or to dedicated hosting services like 100TB or LeaseWeb, the prices stay high.

7. Avoid state in your app.
Though it’s possible to scale CPU and RAM resources in the cloud, there is still a limit that you can’t overcome. At that point, you’ll want to scale horizontally by adding new instances of the same app—but your app should be ready for it. When you have multiple instances of the same app, your users’ requests are distributed across multiple servers, so you can’t store the data on a local disk.
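
As a hedged sketch of what "avoiding state" can look like in practice, here is one way to keep session data in a shared store (Redis in this example; the host name and the redis-py dependency are assumptions) rather than on any one instance's disk:

# Keep session data in a shared store so any app instance can serve any request.
import json
from typing import Optional

import redis

store = redis.Redis(host="redis.internal", port=6379)  # hypothetical host

def save_session(session_id: str, data: dict) -> None:
    # Expires after one hour; visible to every instance behind the load balancer.
    store.setex(f"session:{session_id}", 3600, json.dumps(data))

def load_session(session_id: str) -> Optional[dict]:
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

The same idea applies to uploads and generated files: put them in object storage instead of on the local filesystem.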

8. Consider moving to the cloud if you’re on a dedicated hosting.
You can't easily scale when you're using dedicated hosting; adding more servers takes time. It could take anywhere from a couple of hours to a couple of days to get new servers available, and you usually pay by the month, not by the hour. You don't want to wait hours or days if you're already down. It's much easier to scale in the cloud.

9. Tune your infrastructure.
There are some basic things that are disabled by default that you might want to configure in your OS, network layer, app management, and programming language manager; they might reduce your resource usage dramatically. Google for “your-tech-stack tuning” and follow the basic recommendations.

10. Be ready to start a minimal/cached version.
Despite all your efforts, if you get a 100x spike in traffic, you'll be down. It takes time to scale up, so be ready to serve a static cached version. You might use the CloudFront/Cloudflare cache for this, or your CDN cache, an nginx cache, or anything else. Just make sure you're able to do it when you need to.


COVID-19: AI Fighting the Pandemic Terror

Coronavirus: an alarming threat that is creating mayhem in the lives of many. Despite the terrifying conditions, technologies such as artificial intelligence and big data are improving the ways we detect an outbreak.

At a recent meeting on Downing Street, senior executives from tech giants such as Google, Amazon, Microsoft, Facebook, and Apple met to discuss the pandemic. Among other topics, "modeling and tracking data" was one of the major points of discussion.

A similar meeting at the White House raised the same topics, asking companies and organizations how they could use AI technology to prevent future outbreaks.

According to last month's (February) report by the World Health Organization (WHO), AI and big data played a key role in China's response to COVID-19.

How?

By fighting misinformation:

At the moment, there has been no detailed study of misinformation on the internet, specifically on Google and Facebook, but the information available so far is substantial.

YouTube is using its homepage to direct users to the World Health Organization (WHO) and other websites where they can find information and education.

On the other hand, Google said their teams were “working round the clock to safeguard their users from phishing, conspiracy theories, malware, and misinformation.” Also, an SOS alert pops up alongside the page for more information about the virus and preventive measures one needs to take.

By sharing available data:

Facebook is already doing its bit by working closely with the Harvard University School of Public Health and National Tsing Hua University in Taiwan. Their study involves sharing data about the movement of people, along with a high-resolution population density map that helps forecast the spread of the virus. Additionally, Facebook is using tools such as CrowdTangle to help partners understand how people are addressing the issue online by aggregating social media posts.

In the past, Google search data is said to have been used to help track infectious diseases. To help people track their health status, Google also came up with a small patch that can be worn on the body and transmits data to a phone app.

This body patch could prove especially beneficial for elderly people, among whom viral infections have higher mortality and morbidity rates.

By devising robot cleaners:

Sabine Hauert, a professor at Bristol University, told BBC News that AI can make our daily lives simpler through robots. Robot cleaners could clean hospitals, and robots could also be made available for consultations, remote meetings, and connecting with loved ones.

By helping find the right drugs:

Exscientia, a British startup, became the first company to put an AI-designed drug molecule into human trials, in early 2020. It took the startup barely 12 months to design the molecule with its algorithms, compared with the years such research would normally take.

According to Prof Andrew Hopkins, Exscientia's chief executive, AI could be made useful in multiple ways, such as:

• Scanning existing drugs to check whether they could be repurposed

• Designing a drug to fight the coronavirus, both for the present situation and for future outbreaks

• Developing vaccines and antibodies for COVID-19

It is too early to talk about a cure for pandemic outbreaks. However, with the help of AI, researchers could perhaps do a better job of predicting and responding to the outbreaks that do occur.


How to Create a Killer Resume for Your First IT Job

When applying for your first job in the IT industry, you need to stand out from the rest. Your goal is to convince a recruiter that, irrespective of your lack of industry experience, you are the right person for their company.

In this article, you will learn a few practical tips on writing a killer resume that will help you land your first IT job.

Popular Resume Types

The first is the chronological resume. It focuses on your work history and recent experience, presented in chronological order.

The second is the functional resume, which focuses on the relevance of your skills to an employer. In this kind of resume, your introductory paragraph and professional summary play a fundamental role. This is especially useful for people who want to land their first IT job or who have gaps in their careers.

The third is the combination resume, which takes the best of both worlds. At the top of a combination resume, you provide information about your skills and qualifications; below those sections, you present your work history in chronological order.

Structure your Resume Strategically

The structure of your resume depends on multiple factors, such as your skills, experiences, and employer’s expectations. Still, most resumes have the following sections:

Header and Contact Information

A header is at the top of your CV and it usually contains your name and profession. Your contact information should include your phone number, email address, links to relevant social profiles, and your website URL. 

An Introduction/Professional Summary

A professional summary is a brief overview of your career, telling a recruiter who you are, what you do, and why you are the ideal candidate for the job you are applying for. When writing an introduction, always focus on the value you could deliver to the employer. 

Skills

The skills section is vital to IT recruiters looking for candidates with specialized experiences and backgrounds. Many IT professionals decide to list their skills using bullet points and, in this way, make them more prominent in the resume.

Work Experience

This is a central part of your CV, explaining your work history. The format of this section varies, but it often contains your previous company’s name, location, employment date, role, and a list of your responsibilities. As you are just entering the IT sphere, emphasize any internships that may be relevant to an IT employer. 

Education

For young IT professionals who don't yet have impressive work experience, it is immensely important to focus on education. In most cases, it is enough to list the name of the school, when you attended it, and what degree you attained. If you have taken relevant ICT courses or received any industry-specific awards, don't forget to mention that in your education section.

While your goal is to impress an HR manager, you should still be honest with them. Remember that it is extremely easy for them to confirm how accurate your statements are.

Additional Information

You want to humanize your CV and make it more relevant to the employer. To do so, emphasize your volunteering experiences, awards, and hobbies. This is one of the most effective ways to explain who you are and help a recruiter understand whether you are the right choice for them.

Apply the Right Design and Formatting

Adding relevant information to your resume is only half the job; the other half is choosing the right design and formatting. A cluttered and confusing CV is more difficult to read and may directly impact a hiring manager's perception of you as an individual. This is why you should create a sleek, aesthetically appealing resume that immediately grabs HR managers' attention and inspires them to keep reading.

As for formatting, use legible fonts and leave plenty of white space to make the text easy to follow. The font size should be at least 11 pt, and margins should be at least 0.7 inches. Choose color palettes that are pleasant to the eye. Colors are important because they keep your resume engaging and authentic, but avoid any visually busy element that will only overwhelm an HR manager and distract them from the resume's content. Finally, keep your resume short: one or two pages at most, focused only on the information relevant to the employer.

Proofread your Resume

You want to show that you are taking the position and the employer seriously, and grammatical errors and typos will not help you. Catching your own errors is more difficult than it seems: it is not enough to write a resume and read it a couple of times. Instead, you need to proofread and edit it carefully.

For starters, don’t edit the resume until it is finished. Focus on writing each section carefully and then go back to them and make all the edits needed.

Second, identifying errors in your resume while writing it is difficult. Precisely because of that, you should give yourself a couple of hours before you start editing your CV. Focus on some other activities and get back to the editing process with fresh eyes. 

If this is still not enough, you can always ask a friend or family member you trust to read your resume carefully. Chances are they will catch minor errors that you missed or even provide suggestions on how to improve your resume.

Finally, fact-check everything you have written. Make sure that the name of your school, company, your address, and contact information are accurate.

Over to You

Writing a magnetic resume that stands out is not easy, especially for individuals applying for their first IT job. To convince a recruiter that you are the right choice, first pick the right resume format and structure it strategically. Optimize your sections and make them informative and easy to follow. As you don't have work experience in the niche, emphasize your skills and education. Above all, keep the design clean and eliminate any typos that may harm your image.


Laravel 7.3 Released

The Laravel team released v7.3.0 yesterday with the ability to use ^4.0 versions of ramsey/uuid. Since the release of Laravel 7.2, a few patch releases have also shipped, which we'll briefly cover:

Ability to use Ramsey UUID V4

Laravel 7.3 adds the ability to use ^4.0 versions of ramsey/uuid while still supporting ^3.7. The composer dependency constraint is now ^3.7|^4.0.

Component Fixes

Laravel 7.2.2 fixes a few blade component issues. Notably, the make:component command now supports subdirectories:

php artisan make:component Navigation/Item

# Previously created the following:
# View/Components/Navigation/Item.php
# views/components/item.blade.php

# Now creates them as expected:
# View/Components/Navigation/Item.php
# views/components/navigation/item.blade.php

Fix Route Naming Issue

Laravel 7 introduced route caching speed improvements, but with them came a few issues for apps in the wild. Laravel 7.2.1 fixed a route naming issue with the cache; you should upgrade to the latest 7.x release to get the newest routing fixes.

It’s important to note that you should ensure the uniqueness of route names, as routes with duplicate names can “cause unexpected behavior in multiple areas of the framework.”

Release Notes

The remainder of the updates since v7.2.0 are changes and fixes, listed in full below, and you can see the whole diff between 7.2.0 and 7.3.0 on GitHub. The full release notes for Laravel 7.x are available in the latest v7 changelog:

v7.3.0

Added

  • Added possibility to use ^4.0 versions of ramsey/uuid (#32086)

Fixed

  • Corrected suggested dependencies (#32072, c01a70e)
  • Avoid deadlock in test when sharing process group (#32067)

v7.2.2

Fixed

  • Fixed empty data for blade components (#32032)
  • Fixed subdirectories when making components by make:component (#32030)
  • Fixed serialization of models when sending notifications (#32051)
  • Fixed route trailing slash in cached routes matcher (#32048)

Changed

  • Throw exception for non existing component alias (#32036)
  • Don’t overwrite published stub files by default in stub:publish command (#32038)

v7.2.1

Fixed

  • Enabling Windows absolute cache paths normalizing (#31985, adfcb59)
  • Fixed blade newlines (#32026)
  • Fixed exception rendering in debug mode (#32027)
  • Fixed route naming issue (#32028)


What I Learned Trying to Predict the Price of Cryptocurrencies

A few days ago, I presented a webinar about price predictions for cryptocurrencies. The webinar summarized some of the lessons we have learned building prediction models for crypto-assets on the IntoTheBlock platform. We have a lot of interesting IP and research coming out in this area, but I wanted to summarize some key ideas that can prove helpful if you are intrigued by the idea of predicting the price of crypto-assets.

Here are some interesting ideas:

1) Cryptocurrency price prediction is a solvable problem, but not by a single approach and definitely not for all market conditions.

As the great British statistician George E. P. Box once said, "essentially, all models are wrong, but some are useful". This is especially true when it comes to complex entities like financial markets. In the case of crypto-assets, it is definitely possible to predict price movements, but no single model is going to be effective across all market conditions. Always assume that, eventually, your models are going to fail, and look for alternatives.

2) There are two fundamental ways to think about prediction: asset-based or factor-based.

If you are thinking about predicting the price of Bitcoin, you are following an asset-based strategy. Alternatively, factor-based strategies focus on predicting a specific characteristic, such as value or momentum, across a pool of assets.

3) There are three fundamental technical approaches to tackling crypto-asset predictions.

Most predictive models for capital markets in general, and crypto-assets specifically, can be grouped into the following categories: time-series forecasting, traditional machine learning, and deep learning methods. Time-series forecasting methods such as ARIMA or Prophet focus on predicting a specific variable based on known time-series attributes. Machine learning methods such as linear regression or decision trees have been at the center of predictive models in capital markets for the last decade. Finally, the new school of deep learning proposes deep neural network methods for uncovering non-linear relationships between variables that can lead to price predictions.

4) Time series forecasting methods are easy to implement but not very resilient.

Throughout our experiments, we tested different time series methods such as ARIMA, DeepAR+, and Facebook's Prophet. The results led us to believe that these types of methods weren't designed for complex environments such as capital markets. They are incredibly easy to implement but showed very poor resilience to the market variations that are common in crypto. Furthermore, one of the biggest limitations of time series methods is that they rely on a small, fixed number of predictors, which proved insufficient to describe the behavior of crypto-assets.
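
To make the "easy to implement" point concrete, here is a minimal Python sketch of the time-series approach using statsmodels' ARIMA on a synthetic daily price series. Real crypto prices would come from whatever data source you use, and the order (1, 1, 1) is only an example:

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
prices = pd.Series(
    100 + np.cumsum(rng.normal(0, 2, 365)),          # synthetic random-walk prices
    index=pd.date_range("2019-01-01", periods=365, freq="D"),
)

train = prices[:-7]                                   # hold out the last week
model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=7)                    # predict the held-out week

print(forecast)
print("actual:", prices[-7:].values)

A handful of lines gets you a forecast, which is exactly why these methods are tempting, and exactly why their fragility in volatile markets is easy to overlook.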

5) Traditional machine learning models showed poor generalization capabilities.

Methods such as linear regression and decision trees have been front and center in quant research in capital markets. From that perspective, there is a lot of existing research that can be applied to the crypto space. However, given the unusual behavior of crypto markets, we discovered that most traditional machine learning models have trouble generalizing knowledge and are very prone to underfitting.
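
As a sketch of what a traditional ML setup might look like (synthetic returns, lagged features, a shallow decision tree via scikit-learn), comparing train and test scores is a quick way to see the generalization gap described above:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.02, 1000)                   # stand-in for daily returns

LAGS = 5
X = np.stack([returns[i:i + LAGS] for i in range(len(returns) - LAGS)])
y = returns[LAGS:]                                    # predict the next day's return

split = int(len(X) * 0.8)
model = DecisionTreeRegressor(max_depth=4).fit(X[:split], y[:split])

print("train R^2:", model.score(X[:split], y[:split]))
print("test  R^2:", model.score(X[split:], y[split:]))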

6) Deep learning models are hard to interpret but can perform well in complex market conditions.

Deep neural networks are not exactly new, but their mainstream adoption has only become possible in the last few years. In that sense, practical implementations of these models are relatively nascent. In the case of crypto markets, we discovered that deep learning models can achieve decent levels of predictive performance. However, it is near impossible to interpret what these models are doing internally given their complexity, and they are definitely challenging to implement.
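
For completeness, here is a minimal sketch of the deep learning flavor: an LSTM over sliding windows of past prices, using Keras. The data is synthetic and the architecture is deliberately tiny; a production model would need far more care around features, scaling, and validation:

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))       # synthetic price series

WINDOW = 30
X = np.stack([prices[i:i + WINDOW] for i in range(len(prices) - WINDOW)])
y = prices[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=5, batch_size=32, verbose=0)

next_price = model.predict(prices[-WINDOW:][None, :, None], verbose=0)
print(float(next_price[0, 0]))

The extra expressive power comes at the cost the section describes: there is no obvious way to explain why the network produced that number.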

7) There are some very interesting challenges that are not present in traditional capital markets.

Predictive models for crypto-assets encounter many challenges that are not present in traditional capital markets. From fake volumes and wash trading to the poor quality of many APIs and datasets, there is a lot of infrastructure work that needs to accompany any predictive effort in the crypto space. Additionally, many of the models included in research papers haven't really been tested in real-world markets, and certainly not in crypto.

8) Plenty of challenges, but also exciting opportunities.

I hope these notes provide some perspective on the practical challenges and possibilities of predictive models for crypto-assets. We will continue publishing our research and results in this space and would love to get your feedback.


Pros and Cons of The Gig Economy: Can Remote Workers Be Assets in Small Business?

Advancements in technology have made it possible for businesses to operate as normal with significantly fewer in-house staff. 

The gig economy is growing rapidly, with more freelancers and companies alike possessing the tools needed to quickly collaborate, share work and seamlessly communicate. 

The driving force behind the growth of freelance work can be found in cloud computing, where just about anybody in possession of an internet connection across the world can effectively work for a business elsewhere.

(Figures suggest that the majority of US citizens will be engaging in some form of freelance work by 2027. Image: Website Planet)

As the cloud continues to enable remote work, it's widely expected that the gig economy will keep growing throughout the 2020s. This is potentially good news for small businesses operating within a smaller working environment. With this in mind, let's take a deeper look at the pros and cons of the gig economy and explore whether remote workers can be an asset for small business owners:

Pros:

Cost-effective employment

Of course, the greatest benefit freelancers bring to small businesses is lower costs. Not only do you save on the cost of a salary, but you also save on company perks, health insurance, and office space.

This is particularly useful if you’re planning on scaling in the near future and need to keep on top of your budgets. Without the need for catering to fixed salaries, you can manage your company money much better. 

Less risk

The lower costs associated with hiring freelancers will always mean less risk, but it’s especially pertinent when you’re managing a startup and the short term future of your business is not assured. 

Since you only need to hire and pay freelancers when you have specific jobs to be done, it saves lots of time and money when compared to hiring somebody who may not always have a discernible workload on hand. 

‘Downtimes’ can be part and parcel of a business that’s still establishing itself, so by hiring freelancers, you can go some way in limiting the risks involved in bringing in-house staff into your organisation. 

Strong vetting processes

There’s no shortage of places where businesses can find their ideal freelancers, and platforms like Freelancer.com, UpWork, Fiverr and Thumbtack are great solutions that offer strong vetting processes to ensure that you only recruit the right worker for the task at hand. 

While the process of finding a freelancer was formerly arduous, digital platforms have made it a fairly speedy and streamlined job. This will be particularly good news to entrepreneurs who will undoubtedly have lots on their plate as their company establishes itself in a new market. 

Many platforms not only have a strong vetting process for applicants, but also run a review-based system for workers, so potential employers will be able to see an Uber-style star rating for each potential freelancer they recruit – helping to limit the possibility of a poor performer or bad fit for your business. 

Cons:

No chance of supervision

You may be able to get in contact with your freelancer, but there's little chance of supervising them the way you would an employee. This means you'll never really know how much time they invest in your work, how well they're sticking to your brief, or whether there are aspects of the task they don't understand.

Furthermore, freelancers can’t be trained by in-house staff, meaning they may be unable to comply with your own house styles or standardised practices. 

Fundamentally, there's always the danger that you're being billed for a freelancer who spends 30% of their hours sat on YouTube. As an entrepreneur who needs to give your company the best possible chance of survival, this lack of clarity could prove costly.

Quality won’t be assured

Despite there being strong vetting processes in place, the quality of the work you receive from your freelancer won’t be assured. 

There could be a range of unforeseen limitations in the work your freelancer produces, or worse, it could miss some significant parts of the brief you sent.

Because there’s little chance of supervision, there’s a higher risk that you’ll receive work that isn’t right for your business. This could cost more time and money in correcting underlying errors in-house. 

Lack of loyalty

Naturally, freelancers will have less interest in your business. They’ll likely be working for a number of companies at the same time and their primary focus will be on making ends meet. 

While it’s perfectly reasonable for a worker to put their own needs first, freelancers will have less vested interest in the performance of your business and less reason to stick around than in-house workers. 

Virtually every freelancer you use will be juggling multiple jobs at the same time, so their loyalty to one specific brand will generally be lower than that of a full-time member of staff. For many businesses, this may not matter, but it could be important if, say, you’re looking to use a freelancer on a recurring basis to convey a consistent tone of voice within your on-site content, for example. 

The gig economy is expanding at a rapid rate, and as technology continues to evolve, the market will carry on its rate of growth. 

There are plenty of positives to take on board when using the help of a freelancer, but some negatives could prove to be a sticking point for certain small businesses. Be sure to explore your options, allocate budgets, and review cash flow projections before deciding between freelancers and in-house staff. Technology has offered us an unprecedented level of choice when it comes to running our operations; it's up to you to decide whether you're ready to delve into the gig economy or not.


There Are No Excuses Left for NOT Migrating to JAMStack

This article will help you learn about JAMStack development techniques and benefits, and will help you move forward on your migration path.

What you’ll learn:

1. How to sell JAMstack and Headless CMS to your peers and managers

2. How to design a migration path from a traditional site to a JAMstack site

3. How to program CDN rules and create your DevOps

This article was pulled from content created for the webinar with James Vidler and Joel Varty from March 6th, 2020. Check it out here.

What is JAMStack?

JAMStack stands for JavaScript, APIs, and Markup. JavaScript is the programming language, APIs to drive content and data (like Agility CMS’s Content Fetch API), and Markup to power the UI, including HTML and CSS.

That's the definition we've all started getting familiar with. However, our understanding of the benefits of JAMStack has led us to something much richer – and it's being opened up by frameworks like React, Vue, Angular, and Svelte that have revolutionized how we build user interfaces.

JavaScript Development Frameworks

The vast majority of the millions of software developers worldwide know JavaScript, and the majority of those are familiar with or regularly use React. While other frameworks like Vue.js are also gaining big traction, React is still the king.

What's really interesting is that JAMStack developers are using Static Site Generators built ON TOP of these base frameworks to do amazing things. Now we can use front-end development frameworks to build websites that we'd normally build with PHP, Java, or ASP.NET.

Static Site Generators

Here are four of the top Static Site Generators available today.

Gatsby

Gatsby is probably the fastest-growing framework on the planet today. Gatsby is based on React, but also has a cloud service you can optionally use called Gatsby Cloud – it takes care of your Preview and Builds – and it’s lightning-fast.

I’ll be writing more about Gatsby soon, so stay tuned for that.

Next

Next is another React framework that allows you to do Static Site Generation, but it also has the Zeit / Now hosting framework. A very recent update allows you to optionally use server rendering for some routes in your site if necessary, and includes the ability to easily do previews, something that only Gatsby was offering before.

Looks like we’ve got a real feature race going on here with the top 2 Static Site Generators!

Nuxt

Nuxt is based on Vue.js. I consider it somewhat of a “copycat” framework, but if you’re not into React, you can certainly do most of the same stuff here that you can in Next.

Eleventy

I really love Eleventy – it’s super simple, easy to learn, and it’s not based on any other framework. Vanilla JavaScript for the win!

It’s great if you want to get your dynamic content from a CMS into an existing static HTML site.

JAMStack DevOps

With Static Site Generators, you have a build server instead of a web server. Your code is combined with your CMS Content (and any other data you have) and it spits out Static HTML, CSS, JS.

Now you can serve these files quickly and easily through a CDN – allowing worldwide delivery and a lot of other benefits.

Better Page Speed & Reliability:

– Your Lighthouse scores tend to be much higher

– Time-to-first-byte tends to be way faster

– There’s no web-server to maintain or upgrade!

Better UX Possibilities:

– One of the side-effects of using these amazing front-end frameworks is that we can take advantage of all the great techniques and components that made them popular in the first place.

Simplify Workflow & DevOps:

– Content is now part of your DevOps pipeline – we can no longer circumvent that

– This means that any testing or automated approvals you've got set up happen AFTER the build; your content can be part of that process now.

– Your builds are ATOMIC – you can easily roll back to yesterday’s build if you find a problem in the current build, and you can more easily test a build completely and reliably – it’s just a folder with files in it!

More Secure:

– In terms of security, there are simply fewer ways to attack your site – fewer attack vectors, as we call it. Again: no web server!

Hosting JAMStack Websites

Even though we’ve got static HTML files, we still have to host them somewhere.

You can either go with the major cloud providers such as Azure and AWS, or you can take a look at some of the new-school options that do a ton of the work for you.

Azure and AWS

Azure and AWS are the 2 biggest players in Cloud today. If you have a requirement for compliance or security or your current infrastructure is already there, you are fine to move forward with JAMStack hosting there.

With Microsoft Azure, you’ll need to copy your files to Blob Storage and turn on “Static Site” capabilities. Check their docs here.

Now you can use that as your origin for whatever CDN provider you like best – including Azure CDN.

I’ve got some other ideas on that, though, so keep reading :).

If you're on AWS, you'll be copying your files to the S3 service, which also has the ability to host static sites. Their docs are here.

Again, you'll still need to use a CDN, and you can use AWS CloudFront, but you might want to use a different option after you read the rest of the article.

Netlify

As a hosting platform, Netlify provides a ton of value. It's my top recommendation right now – and what's cool is that the CDN is built right into the offering, and it does a great job of giving you a build process and the ability to control your builds atomically, with rollbacks.

Zeit

Zeit is also a really exciting platform that's evolving quickly. It provides a CDN out of the box and has a new Preview capability, as well as the ability to have any of your routes served statically or server-rendered. That's really cool if you can't decide whether you want to go 100% static yet.

Start Your Migration

We believe that a good headless CMS can mean brands never have to rebuild their website from scratch again.

We often talk about being agile, and this leads right into that concept. You want to be able to take small chunks of your invested resources and incrementally build on them over time. What this means for us right now is that we can take advantage of JAMStack without rebuilding everything.

Edge Computing 101

Normally with a static site, the browser requests resources from a CDN, which serves the static HTML files up. That’s what we want to get to, but do we have to REBUILD the entire site from scratch if we want to do this?

What if we could put something in between the CDN and the static files? That way we could keep using the old site for some pages, but incrementally rebuild the sections of the website that we want to migrate right now.

We can use a technology called Service Workers – although they can be known and configured in many different ways. This means your CDN can be smart enough to know when to serve your static site and when to serve the legacy site that you aren't finished rebuilding yet.

CDN Providers – Service Workers & Proxy Rewrites

StackPath and Cloudflare are great CDN options if you already have your files stored somewhere like Azure Blob Storage or Amazon S3. You can configure a Web Application Firewall, CDN rules, and, most importantly, Service Workers that perform logic at the "edge" to determine where to source the origin content for specific paths.

Remember how I told you that Netlify was really exciting? I wasn't kidding! They've taken all the common scenarios you may come across and created easy steps to solve them. Those of us at Agility CMS have been creating example cases for how to work with Netlify, and it's amazing how seamless it all is with a great headless CMS.

Netlify takes a different approach to the other providers and has a whole system for specifically doing redirects and rewrites.
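
As a hedged illustration (the legacy domain below is a placeholder), a Netlify _redirects file for this kind of incremental migration might look something like this, where the 200 status tells Netlify to proxy the request to the old site instead of issuing a browser redirect:

# _redirects: proxy routes that haven't been migrated yet to the legacy site
/posts/*     https://legacy.example.com/posts/:splat    200
/about-us    https://legacy.example.com/about-us        200
/search      https://legacy.example.com/search          200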

In the example above, we are routing everything under the /posts/ route to a legacy site. Similarly, we are doing the same thing with the /about-us and /search routes.

This is great if you set up your new site in Netlify and want to proxy your old site through it.

There are a few "gotchas" that you need to beware of with these techniques. First of all, you don't just have page routes in your website – there are also CSS, JavaScript, and other secondary resources to keep in mind – and you may have to handle those cases specifically for your site.

Caching

You also want to keep in mind that a CDN wants to cache your site – and this can be a GREAT thing! Your site will automatically be served more quickly. On the other hand, you still want to be able to update your content, so you may want to set the cache duration to a fairly low value or add logic to your content workflow to invalidate the cache when content changes.

DNS

CDN providers need to be integrated into your DNS. This can be tricky to get just right. I've done it a few times, and it gets easier the more you do it, so spending some time testing the process on a dummy domain you control is always advised.

Always Keep Learning

I’d love to connect with you if you have comments or questions on JAMStack or the techniques I’ve outlined above.


InnerSource: A better way to work together on code

With the coronavirus COVID-19 taking the world by storm, and everyone hunkering down in their bunkers, it seems like a good time to think about how we work together. Because of the virus, technology conferences have shuttered, and even frequent office dwellers and meeting makers are learning how to work remotely. Open source software development is usually done remotely, so maybe by borrowing some of the methods of open source development, we can all find better ways to work together and stay connected?

I spoke with Danese Cooper about "InnerSource", or using open source methods to develop internal or proprietary software. Cooper is a technology executive, long-time open source advocate, and now president of the InnerSource Commons Foundation.

(Disclosure: I served with Danese Cooper on the board of the Open Source Initiative and I have worked with her in other capacities. She describes us as frenemies.)

What is InnerSource?

InnerSource is so named to distinguish itself from open source. Unlike open source, InnerSource is developed inside your company. According to Cooper, “InnerSource is the use of open source methods inside the firewall in a proprietary company because it’s a better way to write software.” Some companies who learned how to collaborate using InnerSource also find that they can collaborate more publicly in open source.


Git 2.26 fetches faster by default

With the recent release of Git 2.26, the open source distributed version control system uses version 2 of Git’s network fetch protocol by default.

This protocol, introduced in 2018, addresses a problem with the old protocol, whereby a server would immediately list all of the branches, tags, and other references in the repository before the client could send anything. Use of the old protocol could mean sending megabytes of extra data for some repositories, even when the client only wanted to know about the master branch.

Git's new network fetch protocol begins with the client's request and offers a way for the client to tell the server which references it wants, making fetches from large repositories much faster.

Other capabilities in Git 2.26, which can be downloaded from the project website, include:

  • New config options, including the ability to use wildcards when matching credential URLs. Git config options can be set for all connections or only connections to specific URLs.
  • Updates to sparse-checkouts, which provide a way to have only part of a repository checked out at a time. A git-sparse-checkout add mode allows for adding new directory entries one at a time.
  • The git grep repo search capability is now faster.