The Future of Distance Learning [Infographic]

With COVID-19 officially declared a pandemic, the eLearning economy is being used in an entirely new way. Students whose education came to a standstill when their campuses and schools closed can now continue their educational journey virtually. In fact, online classes seem to be the only way to keep students' brains turning during this pandemic. It's important to understand, however, that eLearning isn't new: it has simply reached its prime, under unfortunate circumstances for human health.

That said, it's unlikely that online learning will fade even after we defeat this virus, because students are loving their new approach to education. The latest tech trends in eLearning include virtual reality lessons that eliminate distractions and increase engagement, augmented reality that delivers a more captivating learning experience, artificial intelligence such as chatbots, and multi-category video. After being exposed to high-level EdTech, more students than ever want to remain enrolled in e-school until graduation.

As previously mentioned, online learning isn't new. 63% of those who were already attending e-school before the COVID-19 outbreak say they chose online education because it works best with their current work/life balance, and the rise of EdTech has made more than 40% plan to return to their alma mater to take more classes.

Overall, the COVID-19 pandemic has given the eLearning market a major boost in revenue. By 2025, the eLearning market is expected to reach $300 billion in value, and 72% of organizations believe eLearning puts them ahead of their competition. Here's how: for every $1 a business spends on distance learning software, its productivity increases by $30, because students who learn online take in five times as much material per hour of training.

In the midst of a pandemic, navigating your way through the online learning market should be easier than ever. Will you be partaking in the future of distance learning?

Awesome-Compose: Application samples for project development kickoff

Software systems have become quite complex nowadays. A system may consist of several distributed services, each providing a specific functionality and updated independently. Starting development on a project of such complexity can be time consuming, particularly when you are not already familiar with the software stack you are going to work with. Most of the time, you need to follow rigorous steps to put the entire project together, and a mistake along the way may force you to start all over again.

As a developer, getting a quick understanding of how the whole stack is wired together and having an easy-to-manage project skeleton can be a very good incentive to use that particular stack for future projects.

Furthermore, there are plenty of open-source software stacks that developers can use for their own setups. Giving them a straightforward way to deploy and try out these software stacks goes a long way toward simplifying software development and allowing developers to explore different options.

To tackle this, we have put together a GitHub repository with application samples that can be easily deployed with Docker Compose. The repository is called awesome-compose, and it contains a curated list of Compose application samples that provide a good starting point for integrating different services using a Compose file and managing their deployment with Docker Compose.

The awesome-compose repository was created to provide a quick and simple way for developers to experience Compose-based setups with different software stacks. Moreover, we hope that you will share your best Compose files for stacks that are not already in the repository, or improve the current samples for everybody to use.

The setups currently provided in the awesome-compose repository fall mainly into two categories:

  • Application skeletons: useful for kicking off project development. These are application skeletons with multiple services already wired together and ready to be launched with docker-compose;
  • Setups with different open-source software stacks: these are not production ready; they are intended mostly for personal/home use or simply for developers to get familiar with the stacks in their local dev environment.

All samples in the repository are provided as is; everybody can customize each sample according to their needs.

We will discuss each category further, along with the benefits it can provide.

Kickoff a project with an application sample

To be able to run the samples from the repository, make sure you have Docker and Docker Compose already installed.

Then, either git clone or download one or more samples from the awesome-compose repository.

$ git clone https://github.com/docker/awesome-compose.git
$ cd awesome-compose

At the root of each sample there is a docker-compose.yml file. It contains the definition and structure of the application and instructions on how to wire its components together.

Identify the sample that matches your requirements and open its directory. Sample directory names follow a very simple pattern consisting of component names separated by '-', which lets us quickly identify the sample we need for our project. For this exercise, let us use the nginx-flask-mysql sample.
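
To give an idea of what such a file contains before we run it, here is an abridged, illustrative sketch of a Compose file for a three-service sample like nginx-flask-mysql. The service names match the containers created below, while the images, ports, and volume names are assumptions for illustration rather than a copy of the file in the repository.

version: "3.7"
services:
  backend:       # Flask application, built from the backend/ directory
    build: backend
    ports:
      - "5000:5000"
  db:            # MySQL database with a named volume for its data
    image: mysql:8.0
    volumes:
      - db-data:/var/lib/mysql
  proxy:         # NGINX reverse proxy in front of the backend
    build: proxy
    ports:
      - "80:80"
    depends_on:
      - backend
volumes:
  db-data: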

There are a few simple steps to follow to get the application skeleton up and running and be able to modify it.

Deploy the application sample

Open the sample directory and run it with docker-compose:

$ cd nginx-flask-mysql
$ docker-compose up -d
Creating volume "nginx-flask-mysql_db-data" with default driver
Building backend
Step 1/8 : FROM python:3.8-alpine
3.8-alpine: Pulling from library/python


Creating nginx-flask-mysql_db_1      … done
Creating nginx-flask-mysql_proxy_1   … done
Creating nginx-flask-mysql_backend_1 … done

Check that there are three containers running, one for each service:

$ docker-compose ps
Name                          Command        State        Ports
—————————————————————————————————————————————————————————————————————————————————————————————
nginx-flask-mysql_backend_1   /bin/sh -c flask run --hos …   Up   0.0.0.0:5000->5000/tcp
nginx-flask-mysql_db_1        docker-entrypoint.sh --def …   Up   3306/tcp, 33060/tcp
nginx-flask-mysql_proxy_1     nginx -g daemon off;          Up   0.0.0.0:80->80/tcp

Query port 80 of the proxy container with curl or in a web browser to have the backend pass the data from the DB:

$ curl localhost:80
<div>Blog post #1</div><div>Blog post #2</div><div>Blog post #3</div><div>Blog post #4</div>

Modify and update the application sample

Let us assume that we have to change the application server, in this case the backend service, which is implemented in Python using the Flask framework. The method returning the message we queried previously can be found below.

@server.route('/')
def listBlog():
    global conn
    if not conn:
        conn = DBManager(password_file='/run/secrets/db-password')
        conn.populate_db()
    rec = conn.query_titles()
    response = ''
    for c in rec:
        response = response + '<div> ' + c + '</div>'
    return response

Assume we change this method to remove the HTML tags:

@server.route('/')
def listBlog():
    ...
    for c in rec:
        response = response + ' ' + c + ' '
    return response

As all our containers are already running, the logical workflow would be to stop the backend service, rebuild its image, and run it again to pick up our change. Doing this for every change during development would be very inefficient.

Instead, we can tweak the docker-compose file and add the following setting under the backend service:

backend:
    build: backend
    restart: always
    volumes:
      - ./backend:/code
    ...

This instructs Docker Compose to mount the backend source code into the container, at the path from which it is executed when the container starts.

Now, all we have to do is to restart the backend to have the changed code executed.

$ docker-compose restart backend
Restarting nginx-flask-mysql_backend_1 … done

Querying the proxy again, we can observe the change:

$ curl localhost:80
Blog post #1 Blog post #2 Blog post #3 Blog post #4

Clean up the deployment and data

To remove all containers, run:

$ docker-compose down
Stopping nginx-flask-mysql_backend_1 … done
Stopping nginx-flask-mysql_db_1 … done
Stopping nginx-flask-mysql_proxy_1 … done
Removing nginx-flask-mysql_backend_1 … done
Removing nginx-flask-mysql_db_1 … done
Removing nginx-flask-mysql_proxy_1 … done
Removing network nginx-flask-mysql_backnet
Removing network nginx-flask-mysql_frontnet

Adding the -v parameter to the down command ensures that all data hosted by the db service is also deleted:

$ docker-compose down -v

Removing volume nginx-flask-mysql_db-data

To conclude this part: the samples provided in the awesome-compose repository may help developers put together all the components of their project in a matter of minutes. This is particularly beneficial for developers who are just getting started with containerized applications managed by docker-compose.

Setups for different software stacks

The second type of sample in the awesome-compose repository is Compose files for setting up different platforms such as Nextcloud, WordPress, Gitea, etc. These samples consist mostly of a Compose file defining a basic setup for each of the components. Their purpose is to give developers an easier introduction to different software stacks, so they can take a quick look at what each stack offers and tinker with it.

Let us consider the Nextcloud setup for the next exercise. Nextcloud is an open source file sharing platform that anyone can install for their private use. The setups in the awesome-compose repository are pieced together according to the instructions on Nextcloud's official image page on Docker Hub.

To deploy it, select the directory of the nextcloud sample you prefer:

$ cd nextcloud-postgres/
$ ls
docker-compose.yaml  README.md

And run it with docker-compose:

$ docker-compose up -d
Creating network "nextcloud-postgres_default" with the default driver
Creating volume "nextcloud-postgres_db_data" with default driver
Creating volume "nextcloud-postgres_nc_data" with default driver
Pulling nc (nextcloud:apache)…
apache: Pulling from library/nextcloud

Creating nextcloud-postgres_nc_1 … done
Creating nextcloud-postgres_db_1 … done

Check that containers are running:

$ docker-compose ps
Name                      Command                         State     Ports
—————————————————————————————————————————————————————————————————————————————————————–
nextcloud-postgres_db_1   docker-entrypoint.sh postgres   Up        5432/tcp
nextcloud-postgres_nc_1   /entrypoint.sh apache2-for …    Up        0.0.0.0:80->80/tcp

$ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS              PORTS                NAMES
a1381bcf5b1c   nextcloud:apache   "/entrypoint.sh apac…"   14 minutes ago   Up About a minute   0.0.0.0:80->80/tcp   nextcloud-postgres_nc_1
ec66a5aff8ac   postgres:alpine    "docker-entrypoint.s…"   14 minutes ago   Up About a minute   5432/tcp             nextcloud-postgres_db_1

In only a few minutes (depending on the internet connection) we get a Nextcloud platform up and running on our local machine. Opening a browser window and going to localhost:80 should land us on Nextcloud's initialization page.

Similarly to the first exercise, to remove all containers run: 

$ docker-compose down
Stopping nextcloud-postgres_nc_1 … done
Stopping nextcloud-postgres_db_1 … done
Removing nextcloud-postgres_nc_1 … done
Removing nextcloud-postgres_db_1 … done
Removing network nextcloud-postgres_default

Use the -v parameter to delete the volumes where the Nextcloud data is stored. The same steps can be followed for all the other software stack samples.

Summary

Lowering the barrier to deploying different software stacks enables more and more developers to have a look at them and potentially use them for their projects.

The awesome-compose repository was created with the purpose of aggregating compose files and application samples that may be useful to anyone interested in containerized applications.

Call for contributions

All developers who are already familiar with Compose and have set up interesting stacks that may be useful to others are highly encouraged to add them to the repository and share them. Improvements to the current samples are also very much appreciated!


Now A Good Time To Talk Immersive Experiences As The Future of Events?

Photo by Eddie Kopp on Unsplash

Even before the world went on lockdown, the future of events was set to take flight with augmented, virtual, and mixed reality trends, collectively called immersive experiences.

The market size of this combined sector is expected to hit $30 billion by 2030.

Moreover, consumer spending on AR and VR technologies is expected to reach $7 billion by 2020, more than any other industry on a standalone basis.

However, creating an experience that genuinely immerses your audience is not an easy task, whether you are developing a VR game or throwing an event. Today's event attendees want more than the usual experiences. Impressing them requires revamping conventional performances with out-of-the-box thinking to deliver an exceptional event. They expect an experience that enlightens and inspires them.

Immersive experiences transform an ordinary event into an unforgettable adventure and make engaging with a brand feel natural. This brings about an opportunity for brands to take a closer look at the lifestyles and interests of their communities. 

The most immersive experiences can be achieved by surrounding the event with engaging technological aspects. If used well, technology can be a powerful tool for event marketers catering to any industry.

So here are a few ideas to ride the wave of immersive experience.

3D Mapping Projections:

Due to their size and scope, 3D images can be mapped onto diverse surfaces for projection and illumination. 3D mapping can create a dramatic impact on a large scale, at a lower cost than many alternatives.

It has the power to make events more mesmerizing. Artists and advertisers commonly use it to give a new immersive dimension to images and videos.

To illustrate, for Montreal's 375th anniversary the organizers made the entire event more immersive with 3D projection mapping technology, intertwining art, culture, and technology in equally impressive proportions. Today it is quite a common trend across the event industry to tell stories via this appealing technology.

A recent study found that 3D mapping campaigns can run for a day or a week and still fail to make an impact on marketers. The primary reason for this outcome is event planners' and marketers' limited understanding of the technology.

As a result, only a few people have seen this magnificent phenomenon, and yet the potential for decision-makers who are willing to pioneer the technology is enormous.

Ambient Interactivity: 

When you combine digital and physical interactions, yet another form of immersive technology is created: ambient interactivity. This form of technology offers user-friendly, natural interactivity that is well suited for event marketing.

The core software technology behind ambient interactivity is artificial intelligence. It is, in fact, a kind of intelligence with more sophisticated algorithms, dubbed ambient intelligence.

Ambient interactivity usually refers to interactive forms built into physical environments, such as interactive windows, walls, and tables. However, it is becoming more fluid today, giving many sectors new scope to engage their audiences.

Marcus Wallander (digital creative at Great Works) says ambient interactivity is a more powerful tool than any other medium for retailers of every sort to attract customers.

Ambient interactivity is designed primarily as a futuristic presentation mode that enables information sharing inside an organization. The technique demonstrates how information can be conveyed automatically and transparently, without direct human interaction.

Augmented Reality

Events of all sorts have started taking augmented reality seriously. An Eventbrite report says 87% of event managers favored augmented reality in the last year, a finding based not on abstract conception but on factual data.

Augmented reality has taken the world by storm in recent years. For example, AR allowed apps like Pokemon GO to attain incredible success, surpassing Twitter in daily active users. The launch of Pokemon GO paved the way for AR apps, bringing the technology's first real commercial breakthrough into the lives of millions.

That success has significant implications for big events. AR is not going anywhere; if anything, it's expanding into AR wearables, which allow event managers to design their activities thematically.

This scope allows event organizers to transform their venue into a tropical paradise or the top of the Eiffel Tower, while virtual assistants tailor customised programs to your individual needs throughout the event.

Virtual Reality

Virtual reality is gaining popularity all around the globe, and over the years its reach has exploded across events as well as the commerce sector. Even with an expensive implementation process, VR can be used to conjure up new worlds for its users.

VR takes us a step ahead in time and is increasingly finding its way into everyday life. Snapchat gave us a real picture of the impact the VR trend is having on our world by providing a unique streaming experience of the Golden Globes through its Spectacles.

In addition, brands like Lufthansa, Huawei, and Chevrolet have been immersing their audiences in virtual reality experiences.

Plus, industries such as events, travel, and music are leveraging this deeper immersive model to engage their audiences in a better way, for it goes far beyond the limitations of augmented reality and allows users to remotely access events via their headsets and get themselves front-row seats.

VR is impressive at a level unachievable by any other technology when it comes to providing immersive experiences, and it is gaining traction among event organizers.

Supernet

Many of the technological innovations of the future are, in fact, already on our doorstep. An interesting one among them is the Supernet. By utilizing the full potential of the internet within the event industry, you can deliver an immersive experience like never before, one that can be streamed to absolutely anyone, anywhere.

And that is just scratching the surface: the Supernet will bring an unimaginable increase in data speeds, which will open up new possibilities for significant events, allowing attendees to upload and download terabytes of data.

Importantly, the Supernet will reduce latency and propel data exchange speeds across the network. As events increasingly adopt immersive technologies, this improvement in network speed will undoubtedly help them run uninterrupted.

Major corporations like Google X are already investing heavily in this technology via projects such as Loon. Since network speed is an integral part of immersive technology, the Supernet may soon become the standard for venues hosting events.

Big Data

With the implementation of the Supernet, big data can benefit immersive technologies as a resource with phenomenal potential.

As the network carries more data with greater accuracy and stability, the major problems that have been confronting the sector will diminish.

If data from events is combined with the Supernet, you could paint an incredibly detailed picture of every attendee and gain deep customer insight through predictive analytics.

Big data's involvement in events will also build customer profiles with records of the events attendees have been to, how they engaged with them, and which ones they are likely to favor in the future. With this much potential, the stored data will give you a better read on your customers than ever.

Drones and Transportation

With the rise of drones, logistics will become much easier to handle in the event industry. But their scope is not limited to transportation: drones can also help create interactive experiences, another cutting-edge vertical that drone technology has contributed to the event and entertainment sector.

Combining the fun of a video gaming experience with drone technology, where users dive into an augmented world, drone-based interaction is another next-level immersive experience joining the lineup.

Many brands are working to combine both VR and drone technology to deliver a better experience. 

Using virtual reality with drones to capture accurate shots and deliver a stable live visual experience is a gem that has not yet been leveraged to anywhere near its full potential.

More companies are working on developing better VR drones to provide an engaging experience for users, and the technology will be utilised in the events sector as well.

Artificial Intelligence

AI enables immersive experiences to be built much faster and with higher fidelity than ever before. AI promises to empower attendees to interact with those experiences in real time.

AI personalizes experiences and provides relevant information in real time. With prodigious processing power, AI brings invaluable abilities beyond content creation.

AI shines at learning from user interactions and reactions to predict the next demand instantaneously. With AI-powered 3D tools and spatial design, it becomes easier to bring the stories around your event to life.

Conclusion

Speculation aside, this is an exciting time for the events industry. All of these kinds of experiences will improve immersive marketing practices and will shortly become less of a novelty, playing a more integral role in broader brand marketing campaigns.

Technology at this level will undoubtedly overcome the current limitations faced by event organizers and usher in a landscape with very different values. The future is far from certain, and yet it looks exciting, with immersive experiences waiting to unfold.


5 Hacks To Help You Keep Remote Teams Productive and Engaged

Photo by Emma Matthews Digital Content Production on Unsplash

Around the world, many of us are working from home because of the spread of coronavirus (Covid-19). Whether you're working from home or remotely, it's an uncertain time, and it is also difficult to stay focused and productive.

Working from home means more distractions, which can lead to less productivity. To overcome this challenge, here are five ways you can keep your team productive while they are working from home.

5 Ways To Keep Your Employees Productive While Working From Home

1. Get Your Team the Right Productivity Tools

One of the most important ways to help employees succeed while working from home is to give them tools and software that help them stay connected and productive. These tools may include daily project assignment tools such as Asana, a productivity tracker such as Hubstaff, and video conferencing apps such as Skype and Google Hangouts.

Equipping your teams with these new tools allows all your employees to stay updated whether they are working from home or from another remote location. These tools might also help keep your employees more productive and connected amid the fear of the COVID-19 spread.

2. Encourage Your Employees To Have a Dedicated Work Space

Your employees may never have needed a dedicated workspace in their homes before, but now that they are regularly working from home, you should encourage them to set up a dedicated workspace at home, separate from communal space.

It's true that your physical surroundings affect your productivity at work. A dedicated workspace can help employees stay free from the distractions of home life. If they have a dedicated room they can use as a workspace, encourage them to arrange a desk or use an existing table for work purposes, and to avoid working on the bed or couch, as these places are usually reserved for relaxation rather than work.

3. Stay Connected With Your Employees During Work Hours

The biggest struggles of working from home every day are a lack of motivation, poor communication, and negative thoughts, so employers should provide emotional support to employees. Leaders should set up regular check-ins, which helps create a workplace where people can still get things done.

Managers should also be available on instant messaging and video conferencing apps throughout the day to help employees. Additionally, encouraging your employees to practice self-care is important so they can carry on with their lives as normally as they can. You should also tell them to keep their work and home lives separate: make sure they don't run errands or watch videos while working, and once work hours are over, don't intrude on their downtime.

4. Don’t Forget About Team Building Activities  

When your employees are working from home, surrounded by other people, it's often not easy to stay focused on work. When you're at home and no one is assigning you work, you very quickly learn how undisciplined you are. Developing self-discipline is important for being productive at home, but there are a few things that can help.

Creating time and space for workers, for example by assigning them daily work and talking about news, updates, and other topics just as they would have done in the office, helps them feel better connected. One way to do this is to hold video conferences that are open for all your workers to catch up. Another is to arrange virtual team-building exercises to bridge the gap between employees.

5. Promote Small Screen Breaks and Healthy Habits

We all know that sleeping enough, eating healthy, and getting exercise are good for your body and mind. When working from home, you're likely staring at laptop and phone screens for hours each day. It's unavoidable, but encourage your workers to take short breaks away from the screen throughout the day.

Leaders should allow workers to take walking breaks outdoors (abiding by the laws and regulations in your country or region), as walking around for a few minutes helps them think and work a bit better.

This can be helpful if you're stuck on a business problem, a coding error, or anything else. Try to avoid eye strain with a white-light filter on your laptop or mobile screen, or use a mobile app that tints your screen slightly yellow at night. For example, you can use the free f.lux app on your laptop.

Want to read more? Be sure to follow my team on LinkedIn.

Best wishes, and stay safe out there!


Keep Track of Model Inventory with Laravel Stock

Laravel Stock is a package by Gijs Jorissen for keeping track of inventory counts on models:

Keep stock for Eloquent models. This package will track stock mutations for your models. You can increase, decrease, clear, and set stock. It’s also possible to check if a model is in stock (on a certain date/time).

For example, let’s say you have a Book model with which you need to keep track of stock:

use Appstract\Stock\HasStock;

class Book extends Model
{
    use HasStock;
}

When a customer places an order for a book, you can change the stock counts:

$book->increaseStock(10);
$book->decreaseStock(10);

// Change stock positively or negatively with one method
$book->mutateStock(10);
$book->mutateStock(-10);

Next, in your UI you could check to see if a product is in stock:

$book->inStock();
// See if you have at least 10 of the same book in stock
$book->inStock(10);

Finally, you can clear stock out:

$book->clearStock();

// Clear stock and then set a new value
$book->clearStock(10);
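
As a quick, hypothetical sketch of how these calls might fit together in an order flow (the $book and $quantity variables and the surrounding order logic are made up for illustration, not part of the package):

// $book is an Eloquent model using the HasStock trait; $quantity comes from the order request.
if ($book->inStock($quantity)) {
    $book->decreaseStock($quantity);
    // ... continue creating the order ...
} else {
    // ... tell the customer the title is out of stock ...
}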

You can learn more about this package, get full installation instructions, and view the source code on GitHub at appstract/laravel-stock.


Jenkins Architecture Explained – Beginners Guide


Jenkins is an easy-to-use open source CI/CD tool. This blog covers the fundamental Jenkins component architecture. If you are a beginner to Jenkins, it will help you understand how the Jenkins components work together and the key configurations involved.

Jenkins Architecture

The following diagram shows the overall architecture of Jenkins.

Jenkins architecture explained

Following are the key components in Jenkins:

  1. Jenkins Master Node
  2. Jenkins Slave Node
  3. Jenkins Web Interface

Let's look at each component in detail.

Jenkins Master (Server)

The Jenkins server, or master node, holds all the key configurations.

The following are the key Jenkins master components.

  1. Jenkins Jobs: A job is a collection of steps that you can use to build your source code, test your code, run a shell script, or run an Ansible role on a remote host. There are multiple job types available to support your workflow for continuous integration and continuous delivery.
  2. Jenkins Plugins: Plugins are community-developed modules that you can install on your Jenkins server. They let you add functionalities that are not natively available in Jenkins. You can also develop your own custom plugins. Check out all plugins in the Jenkins Plugin Index.
  3. Jenkins User: Jenkins has its own user database, which can be used for Jenkins authentication.
  4. Jenkins Global Security: Jenkins has the following two primary authentication methods.
    1. Jenkins's own user database: a set of users maintained in Jenkins's own database.
    2. LDAP integration: Jenkins authentication using a corporate LDAP configuration.
  5. Jenkins Credentials: If you want to save any secret information that has to be used in the jobs, you can store it as a credential. All credentials are encrypted by Jenkins.
  6. Jenkins Nodes/Clouds: You can configure multiple slave nodes (Linux/Windows) or clouds (Docker, Kubernetes) for executing Jenkins jobs.
  7. Jenkins Global Settings (Configure System): Under Jenkins global configuration, you have all the configurations of installed plugins and native Jenkins global configurations. Also, you can configure global environment variables under this section.
  8. Jenkins Logs: Provides logging information on all Jenkins server actions including job logs, plugin logs, webhook logs, etc.

All the configurations for the above-mentioned components will be present as a config file in the Jenkins master node.

Note: Jenkins doesn't have a database. All Jenkins configurations are stored as flat config files, mostly XML files.

Jenkins Slave

Jenkins slaves are the worker nodes for the jobs configured in Jenkins server.

Note: You can run jobs on the Jenkins server without any Jenkins slaves. However, the recommended approach is to have segregated Jenkins slaves for different job requirements, so that you don't end up messing up the Jenkins server with system-wide configuration changes required for a particular job.

You can have any number of Jenkins slaves attached to a master with a combination of Windows & Linux servers.

Also, you can restrict jobs to run on specific slaves, depending on the use case. For example, if you have a slave with a Java 8 configuration, you can assign that slave to jobs that require a Java 8 environment.

There is no single standard for using the slaves. You can set up a workflow and strategy based on your project needs.

Jenkins Web Interface

Jenkins 2.0 introduced a very intuitive web interface called “Jenkins Blue Ocean”. It has a good visual representation of all the pipelines.

Jenkins Master-Slave Connectivity

You can connect a Jenkins master and slave in two ways:

  1. Using the SSH method: Uses the SSH protocol to connect to the slave. The connection is initiated from the Jenkins master, and there should be connectivity over port 22 between master and slave.
  2. Using the JNLP method: Uses the Java JNLP protocol. In this method, a Java agent is started on the slave with the Jenkins master's details. For this, the master node's firewall should allow connectivity on the specified JNLP port. Typically the port assigned is 50000, but this value is configurable.

There are two types of Jenkins slaves:

  1. Slave Nodes: These are servers (Windows/Linux) configured as static slaves. They are up and running all the time and stay connected to the Jenkins server. Organizations use custom scripts to shut down and restart slaves when they are not in use, typically during nights and weekends.
  2. Slave Clouds: A Jenkins cloud slave is a dynamic slave: whenever you trigger a job, a slave is deployed as a VM or container on demand and gets deleted once the job is completed. This method saves infrastructure costs when you have a huge Jenkins ecosystem and continuous builds.

Jenkins Data

All Jenkins data is stored in the Jenkins home directory ($JENKINS_HOME, typically /var/lib/jenkins when Jenkins is installed from a package). The data includes all job config files, plugin configs, secrets, node information, etc.

It is very important to back up the Jenkins data folder every day. If your Jenkins server data gets corrupted for some reason, you can restore the whole Jenkins instance from the data backup.
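
As a minimal illustration (assuming the default /var/lib/jenkins home directory mentioned above; adjust the paths to your own setup), a simple cron-friendly backup command that archives the jobs, plugin configs, secrets, and node information could look like this:

$ tar -czf /backup/jenkins-home-$(date +%F).tar.gz -C /var/lib/jenkins .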


Jenkins Tutorial For Beginners – Getting Started Guide


Jenkins is the most widely adopted open source continuous integration tool. A lot has changed in Jenkins 2.x compared to the older versions. In this Jenkins tutorial series, we will try to cover all the important topics a beginner needs to get started with Jenkins.

Jenkins is not just a continuous integration tool anymore; it is a continuous integration and continuous delivery tool. You can orchestrate your application deployments in a better way using Jenkins.

In this series of posts, we will be covering various Jenkins tutorials which will help beginners to get started with many Jenkins functionalities.

List of Jenkins Tutorials For Beginners

We will be covering all the important topics in Jenkins 2 in this tutorial series which will get you started with the new core components.

Following is the list to get started with.

  1. Jenkins Architecture Explained
  2. Installing and configuring Jenkins 2.0
  3. Setting up a distributed Jenkins architecture (Master and slaves)
  4. Backing up Jenkins Data and Configurations
  5. Configuring Docker Containers as Build Slaves
  6. Configuring ECS as Build Slave For Jenkins
  7. Setting up Custom UI for Jenkins
  8. Running Jenkins on port 80

ONLINE COURSE: Getting Started With Jenkins 2


This course teaches you the latest Jenkins pipeline as code from scratch with all its functionalities to take your code from development to production.

  1. Learn to set up Jenkins 2 and administer it.
  2. Create build pipelines with pipeline as code
  3. Continuous Integration with Jenkins 2
  4. End to End deployment strategies from development to production.

Note: You can get 10 days free access using the free trial


Jenkins 2.x Features

The following are the key features in Jenkins 2.x:

  1. Pipeline as Code
  2. Shared Libraries
  3. Better UI and UX
  4. Improvements in security and plugins

Pipeline as Code

Jenkins 2.0 introduced a DSL with which you can version your build, test, and deploy pipelines as code. Pipeline code is wrapped in a Groovy script, which is easy to write and manage. An example pipeline is shown below.
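
Here is a minimal declarative pipeline sketch of what such code looks like; the stage names and shell commands are placeholders rather than a copy of any specific project's pipeline.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // placeholder build command
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
        stage('Deploy') {
            steps {
                // placeholder deployment script
                sh './deploy.sh'
            }
        }
    }
}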

Using pipeline as a code you can run parallel builds on a single job on different slaves. Also, you have good programmatic control over how and what each Jenkins job should do.

Jenkins Shared Libraries

A Jenkins shared library is a great way to reuse pipeline code. You can create libraries of your CI/CD code that can be referenced in your pipeline scripts. Extended shared libraries allow you to write custom Groovy code for more flexibility.
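
For instance, a pipeline might load and use a shared library roughly like this; my-shared-lib and deployToStaging are hypothetical names that would be configured in Jenkins' Global Pipeline Libraries settings and implemented by your team.

// Loads a shared library configured in Jenkins
@Library('my-shared-lib') _

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // a custom step provided by the shared library (e.g. vars/deployToStaging.groovy)
                deployToStaging()
            }
        }
    }
}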

Jenkins X

Jenkins X is a project from the Jenkins community for CI/CD on Kubernetes.

Better UI and UX

Jenkins 2.0 has a better user interface. The pipeline design is also great, with the whole flow visualized. Now you can configure the user, password, and plugins right from the moment you start the Jenkins instance, through an awesome UI.

Also, Jenkins Blue Ocean is a great plugin that gives a great view of pipeline jobs. You can even create a pipeline using the Blue Ocean visual pipeline editor. Blue Ocean looks like the following.

Jenkins Blue Ocean


GitOps puts the power of Git into Ops

By now you’ve probably heard of GitOps and, if so, you may still be wondering what it means. It probably won’t help if I tell you GitOps doesn’t necessarily involve Git (no, really), nor does it require Kubernetes, the orchestration engine with which it’s regularly paired.

Confused much? Well, try this: GitOps is a way to enable a developer-centric experience for managing applications, as Weaveworks, the company that coined the term “GitOps,” might say. Put more bluntly, it’s a way to give developers even more control over their work. Think of it as DevOps on steroids, or DevOps taken to its natural conclusion.

That conclusion? To empower developers to take on a much larger role in the operations of their applications, all while also making the lives of ops professionals significantly better, too.

In the beginning was Git

Linus Torvalds might be best known as the creator of Linux, but Git, the distributed version control system of his invention, is arguably even more important. Torvalds has said that “Git proved I could be more than a one-hit wonder,” but this is an understatement in the extreme. While there were version control systems before Git (e.g., Subversion), Git has revolutionized how developers build software since its introduction in 2005.

Today Git is a “near universal” ingredient of software development, according to studies pulled together by analyst Lawrence Hecht. How “near universal?” Well, Stack Overflow surveys put it at 87 percent in 2018, while JetBrains data has it jumping from 79 percent (2017) to 90 percent (2019) adoption. Because so much code sits in public and (even more in) private Git repositories, we’re in a fantastic position to wrap operations around Git.

To quote Weaveworks CEO Alexis Richardson, “Git is the power option, [and] we would always recommend it if we could, but it is very wrong to say that GitOps requires expertise in Git. Using Git as the UI is not required. Git is the source of truth, not the UI.” Banks, for example, have old repositories sitting in Subversion or Mercurial. Can they do GitOps with these repositories? Yes. In fact, some elements of GitOps began to appear as early as the 2000s.

But for most companies, much of the time, the reliance on Git is what makes GitOps such a fascinating advance on DevOps, and a big, near-term opportunity.

Ops of the Kubernetes kind

Oh, and Kubernetes, too. Why Kubernetes? While different container orchestration engines can be used, Kubernetes is the industry default. According to Weaveworks, GitOps is two things:

  • An operating model for Kubernetes and cloud native. It provides a set of best practices to join up deployment, management, and monitoring for containerized clusters and applications.
  • A path towards a developer-centric experience for managing applications.

Maybe you don’t need Kubernetes for everything, but many organizations are turning to it as an essential aspect of how they deploy software. And yet far too many are flying blind on their Kubernetes clusters. How so? According to Cornelia Davis, CTO at Weaveworks, while IT has had various tools in place (e.g., config management, discovery, etc.) to try to track what was going on within and across systems, to a large extent much wasn’t known, which is why patch management has proved so hard.

“Moving to Kubernetes didn’t make IT magically less blind,” Davis says. “They carried the problems they already had forward into Kubernetes.”

Or as Richardson put it, “How do you know if you update Kubernetes that it’s in the correct state? Do you get told if it’s in the wrong state? The answer is no. [Developers] have no idea. They are flying blind.” As such, Kubernetes users are “frozen” because they’re stuck with clusters they’re scared to update. 

Enter GitOps.

A true DevOps (GitOps!) experience

The GitOps model cures such Kubernetes paralysis, without requiring developers to become Kubernetes gurus. It does so by using automated directors to deploy changes to a Kubernetes cluster to bring it back into line with a declarative model of the desired state, according to Richardson:

What if everything in the cluster were updated via a model? If you install some agents into your cluster that look at current state and compare it to the model, you can then make changes to force it to conform to the model. You’re not updating them directly – you’re updating the models. Along the way you get continuous integration, progressive delivery, etc.
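
To make the idea concrete, the declarative model is typically a set of Kubernetes manifests kept in Git. The snippet below is an illustrative sketch (the names, image tag, and replica count are made up); an in-cluster agent such as Flux or Argo CD would continuously compare the live cluster against it and reconcile any drift.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired state lives in Git, not in someone's terminal history
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # changing this tag via a pull request is what triggers a rollout
          image: registry.example.com/web:1.4.2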

Weaveworks describes this in more detail but I also like Redmonk analyst James Governor’s summary:

  • Provisioning of AWS resources and deployment of Kubernetes is declarative.
  • The entire system state is under version control and described in a single Git repository.
  • Operational changes are made by pull request (plus build and release pipelines).
  • Diff tools detect any divergence and notify you via Slack alerts; and sync tools enable convergence.
  • Rollback and audit logs are also provided via Git.

It’s an approach that sounds great for developers and the application teams they may serve. But what about the platform engineering team, the folks we often call “ops” and who have particular responsibility for security, compliance, cost management, and more?

For ops, GitOps drives huge value through repeatability. Need to bring an availability zone back up? The platform engineering team knows that because everything is modeled, they can just run the reconcilers to bring things back in line with the models. Coupled with Git’s capability to revert/rollback and fork, the ops team gains stable and reproducible rollbacks, not to mention Git’s security benefits and more.

GitOps, in short, just might be what DevOps has long aspired to be: a great way for developers to take on more of the operational burden for their apps, even as platform engineering (ops) is better able to tackle their roles.


9 offbeat databases worth a look

By and large, if you need a database, you can reach for one of the big names—MySQL/MariaDB, PostgreSQL, SQLite, MongoDB—and get to work. But sometimes the one-size-fits-all approach doesn’t fit all. Every now and then your use case falls down between barstools, and you need to reach for something more specialized. Here are nine offbeat databases that run the gamut from in-memory analytics to key-value stores and time-series systems.

DuckDB

The phrase “SQL OLAP system” generally conjures images of data-crunching monoliths or sprawling data warehouse clusters. DuckDB is to analytical databases what SQLite is to MySQL and PostgreSQL. It isn’t designed to run at the same scale as full-blown OLAP solutions, but to provide fast, in-memory analytical processing for local datasets.

Many of DuckDB’s features are counterparts to what’s found in bigger OLAP products, even if smaller in scale. Data is stored as columns rather than rows, and query processing is vectorized to make the best use of CPU caching. You won’t find much in the way of native connectivity to reporting solutions like Tableau, but it shouldn’t be difficult to roll such a solution manually. Aside from bindings for C++, DuckDB also connects natively to two of the most common programming environments for analytics, Python and R.
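
To show what fast, local, in-memory analytical processing looks like in practice, here is a small illustrative sketch using DuckDB's Python bindings; the events.csv file and its columns are assumptions made up for the example.

import duckdb

# An in-memory analytical database; nothing is written to disk.
con = duckdb.connect()

# Load a local CSV into a table, then run an aggregation over it.
con.execute("CREATE TABLE events AS SELECT * FROM read_csv_auto('events.csv')")
rows = con.execute("""
    SELECT user_id, count(*) AS n
    FROM events
    GROUP BY user_id
    ORDER BY n DESC
    LIMIT 10
""").fetchall()
print(rows)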

EdgeDB

“Edge” is a term used in graph databases to refer to the connection or relationship between two entities or nodes (such as between a customer and an order, or between an order and a product, etc.) of a highly connected dataset. EdgeDB uses the PostgreSQL core and all the properties it provides (like ACID transactions and industrial-strength reliability) to build what its makers call an “object-relational database” with strong field types and a SQL-like query language. 

Thus EdgeDB combines NoSQL-like ease of use and immediacy, the relational modeling power of a graph database, and the guarantees and consistency of SQL. Even though EdgeDB is not formally a document database, you can use it to store data that way. And you can use the GraphQL query language to easily retrieve data from EdgeDB, just as you can with native graph databases such as Neo4j.

FoundationDB

An open source project spearheaded by Apple, FoundationDB is a “multi-model” database that stores data internally as key-value pairs (essentially the NoSQL model), but can be organized into relational tables, graphs, documents, and many other data structures. ACID transactions guarantee data integrity, and horizontal scaling and replication are both available out of the box. FoundationDB’s design comes with some stiff restrictions, though: keys, values, and transactions all have hard size limits, and transactions have hard time limits as well.

HarperDB

The goal behind HarperDB is to provide a single database for handling structured and unstructured data in an enterprise—somewhere between a multi-model database like FoundationDB and a data warehouse or OLAP solution. Ingested data is deduplicated and made available for queries through the interface of your choice: SQL, NoSQL, Excel, etc. BI solutions like Tableau or Power BI can integrate directly with HarperDB without the data needing to be extracted or processed. Both enterprise and community editions are available.


KeyDB

As popular and powerful as Redis is, the in-memory key-value store has been criticized for falling short in threaded performance and ease of use. KeyDB is protocol-compatible with Redis, so it can be used as a drop-in replacement. But KeyDB adds some nifty under-the-hood improvements, chiefly multithreading for network I/O operations and query parsing. Plans for the next edition of Redis, Redis 6, include threaded I/O as well, but KeyDB is available now.

M3DB

A product of Uber’s internal engineering team, M3DB is a distributed time-series database that is used in Uber’s metrics platform (essentially as a data store for Prometheus). Borrowing ideas from Apache Cassandra and a Facebook project named “Gorilla,” M3DB allows arbitrary time precision, out-of-order insertions, and configurable levels of replication and read consistency. However, the creators note that M3DB might not be suitable for all time-series database use cases. For instance, M3DB can’t insert data out of order beyond a given time window (the default is two hours), and it is mainly optimized for storing and retrieving 64-bit floats rather than other kinds of data.

RediSQL

The name implies a fusion of the Redis in-memory key-value store and SQL query capabilities, and that’s exactly what RediSQL is — specifically, a Redis module that embeds a SQLite database. Data is stored transparently in Redis, so Redis handles persistency and in-memory processing. Each database is associated with a Redis key, so you can have multiple SQL databases on a single Redis instance. Queries to those databases are standard SQL, passed via the standard Redis API. You can also create and precompile statements (essentially stored procedures) in RediSQL to speed up query execution. Both commercial and open source editions are available.

RQLite

SQLite is a little miracle: an embeddable open source database that is lightning-fast and ultra-reliable. SQLite makes a great default choice whenever you need a database in a single-user application, but SQLite instances are limited to a single node.

RQLite builds on SQLite to create a distributed database system. Setting up multiple nodes is easy, and data automatically replicates across those nodes using the Raft consensus algorithm. RQLite also provides encryption between nodes and a discovery service that makes it easy to add nodes automatically. But RQLite also has a few drawbacks: Write speeds are slower than in SQLite, and only deterministic SQL functions—i.e., those guaranteed to produce the same result on every node—are safe to use.

UmbraDB

Most high-end databases these days have some kind of in-memory functionality, even if it involves something like table pinning (e.g., SQL Server). UmbraDB, an analytics database that can run as a drop-in replacement for PostgreSQL, is designed to use in-memory processing whenever it can. When it can’t, it uses a novel variable-size page mechanism for paging data from storage. Long-running queries are optimized for execution with LLVM.
