Bodhi Linux 5.1.0 Released based on Ubuntu 18.04.4 LTS

A new version of Bodhi Linux is available to download based on the recent Ubuntu 18.04.4 point release.

While Bodhi Linux isn’t a so-called headline distro it has gained a solid following over the years thanks to its combination of low system resource requirements and solid performance with the quirky Moksha desktop environment and popular lightweight desktop apps.

And truth be told I have a bit of a soft spot for it, too. I like distros that ‘do things differently’ and, amidst a sea of pale Ubuntu spins sporting minor cosmetic changes, Bodhi Linux does just that.

Bodhi Linux Screenshot

Bodhi 5.1.0

Bodhi Linux 5.1.0 is the first major update to the distro in almost two years, succeeding the Bodhi 5.0 release from back in 2018.

The update, aside from being based on the recent Ubuntu 18.04.4 LTS release and HWE, makes some software substitutions. The ePad text editor is replaced with the lightweight Leafpad. Likewise, the Midori web browser is supplanted by Epiphany (aka GNOME Web).

To help promote the new release Bodhi devs have put together the following video ‘trailer’, which you can view below if your browser supports video embeds:

[embedded content]

Bodhi Linux runs well on low-end machines (though not exclusively; it’s perfectly usable for gaming rigs too). If you’re minded to give an old Celeron-powered netbook a new purpose then a Bodhi install wouldn’t be a bad way to go about it.

Fair warning though: the Moksha desktop environment, which is based on Enlightenment libraries, is not for everyone. The modular nature of Moksha means it works rather differently to vertically-integrated DEs like GNOME Shell and KDE Plasma.

But different isn’t necessarily bad.

You can learn more about the Bodhi Linux 5.1 release on the distro’s official blog. To download the very latest release as a 64-bit .iso, hit the button below, or grab the official torrent:

Download Bodhi Linux 5.1.0 (64-bit .iso)

If you have a 32-bit only machine you can download and use the Bodhi Linux 5.1 legacy release. This features Linux kernel 4.9 and does not require the PAE extension:

Download Bodhi Linux 5.1.0 (32-bit .iso)


Tricks for getting around your Linux file system

Whether you’re moving around the file system, looking for files, or trying to move into important directories, Linux can provide a lot of help. In this post, we’ll look at a number of tricks that make moving around the file system, and finding and using the commands you need, a little easier.

Adding to your $PATH

One of the easiest and most useful ways to ensure that you don’t have to invest a lot of time into finding commands on a Linux system is to add the proper directories to your $PATH variable. The order of the directories in your $PATH is, however, very important: it determines the order in which the system searches them for the command to run, stopping when it finds the first match.

You might, for example, want to put your home directory first so that, if you create a script that has the same name as some other executable, it will be the one that you end up running whenever you type its name.

To add your home directory to your $PATH variable, you could do this:

$ export PATH=~:$PATH

The ~ character represents your home directory.

If you keep your scripts in your bin directory, this would work for you:

$ export PATH=~/bin:$PATH

You can then run a script located in your home directory like this:

$ myscript
Good morning, you just ran /home/myacct/bin/myscript

IMPORTANT: The commands shown above add to your search path because $PATH (the current path) is included. They don’t override it. Your search path should be configured in your .bashrc file, and any changes you intend to be permanent should be added there as well.
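For example, a minimal way to persist the change (assuming bash and the conventional ~/.bashrc location) looks like this:

```shell
# Append the export to ~/.bashrc so every new shell picks it up;
# single quotes keep $PATH from expanding at write time
echo 'export PATH=~/bin:$PATH' >> ~/.bashrc

# Confirm the line landed, then show the front of the current search path
tail -n 1 ~/.bashrc
echo "$PATH" | tr ':' '\n' | head -n 3
```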

Using symbolic links

Symbolic links provide an easy and obvious way to record the location of directories that you might need to use often. If you manage content for a web site, for example, you might want to get your account to “remember” where the web files are located by creating a link like this:

ln -s /var/www/html www

The order of the arguments is critical. The first (/var/www/html) is the target and the second is the name of the link that you will be creating. If you’re not currently located in your home directory, the following command would do the same thing:

ln -s /var/www/html ~/www

After setting this up, you can use “cd www” to get to /var/www/html.
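You can try the same pattern risk-free with scratch directories (the paths below are throwaway stand-ins for the real /var/www/html):

```shell
# Build a fake web root and link to it, then cd through the link
base=$(mktemp -d)
mkdir -p "$base/var/www/html"
ln -s "$base/var/www/html" "$base/www"

cd "$base/www"
pwd -P    # shows the real location behind the link
```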

Using shopt

The shopt command also provides a way to make moving to a different directory a bit easier. When you employ shopt’s autocd option, you can go to a directory simply by typing its name. For example:

$ shopt -s autocd
$ www
cd -- www
/home/myacct/www
$ pwd -P
/var/www/html

$ ~/bin
cd -- /home/myacct/bin
$ pwd
/home/myacct/bin

In the first set of commands above, the shopt command’s autocd option is enabled. Typing www then invokes a “cd www” command. Because this symbolic link was created in one of the ln command examples above, this moves us to /var/www/html. The pwd -P command displays the actual location.

In the second set, typing ~/bin invokes a cd into the bin directory in the user’s home.

Note that the autocd behavior will not kick in when what you type is a command, even if it’s also the name of a directory.

The shopt command is a bash builtin and has a lot of options. This one just means that you don’t have to type “cd” before the name of each directory you want to move into.

To see shopt‘s other options, just type “shopt”.

Using $CDPATH

Probably one of the most useful tricks for moving into particular directories is adding the paths that you want to be able to reach easily to your $CDPATH. This creates a list of directories you can move into by typing only a portion of the full path name.

There is one aspect of this that may be just a little tricky. Your $CDPATH needs to include the directories that contain the directories that you want to move into, not the directories themselves.

For example, say that you want to be able to move into the /var/www/html directory simply by typing “cd html” and into subdirectories in /var/log using only “cd” and the simple directory names. In this case, this $CDPATH would work:

$ CDPATH=.:/var/log:/var/www

Here’s what you would see:

$ cd journal
/var/log/journal
$ cd html
/var/www/html

Your $CDPATH kicks in when what you type is not a full path. Then it looks down its list of directories in order to see if the directory you identified exists in one of them. Once it finds a match, it takes you there.
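That lookup order is easy to reproduce with throwaway directories (the paths below are scratch stand-ins for /var/log and /var/www):

```shell
# Build stand-in directories and put their parents on $CDPATH
demo=$(mktemp -d)
mkdir -p "$demo/www/html" "$demo/log/journal"

export CDPATH=".:$demo/log:$demo/www"
cd /
cd html      # found via $demo/www; the shell prints where it went
pwd -P
cd journal   # found via $demo/log
pwd -P
```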

Keeping the “.” at the beginning of your $CDPATH means that you can move into local directories without having to have them defined in the $CDPATH.

$ export CDPATH=".:$CDPATH"
$ Videos
cd -- Videos
/home/myacct/Videos

It’s not hard to move around the Linux file system, but you can save a few brain cells if you use some handy tricks for getting to various locations easily.

Join the Network World communities on Facebook and LinkedIn to comment on topics that are top of mind.

Running Linux Applications as Unikernels with K8S

If you’ve read some of my prior articles you might’ve thought I’d never write this one huh? 🙂 Well here goes.

A common question that we get is “Can you use unikernels with K8S?” The answer is yes; however, there are caveats. Namely, unikernels come packaged as virtual machines, and in many cases k8s is itself provisioned on the public cloud on top of virtual machines. Also, you should be aware that provisioning unikernels under k8s incurs security risks that you would otherwise not need to deal with. These are greatly diminished since the guests are unikernels, not Linux guests, but still.

Now, if you have your own servers or you are running k8s on bare metal this is how you’d go about running Nanos unikernels under k8s.

For this article you need a real physical machine and OPS. While you can use nested virtualization, I wouldn’t, because you’ll take a pretty significant performance hit. Google Cloud offers this feature on some of its instances, and on Amazon you might be able to run this example on the “metal” instances (I haven’t checked). Keep in mind, though, that neither option is cheap compared to simply spinning up a t2.nano or t2.micro instance, which you can do easily with unikernels.

We are going to run a Go unikernel for this example, but you can use any OPS example to follow along. Here we have a simple Go webserver that listens on port 8083:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Welcome to my website!")
    })
    fs := http.FileServer(http.Dir("static/"))
    http.Handle("/static/", http.StripPrefix("/static/", fs))
    http.ListenAndServe(":8083", nil)
}

Ok – looks good. We can quickly build the image and ensure everything is working alright like so. We are using the ‘nightly’ build option here:

ops run -n -p 8083 goweb

Ops build works here as well, but run will also boot the image for you so you can first ensure it works locally. Now we’ll need to put it into a format for k8s to use. First, we compress it with xz (sudo apt-get install xz-utils):

cp .ops/images/goweb.img .
xz goweb.img

From there we need to put it into a place for k8s to import it. I tossed it into a cloud bucket and to keep this article as simple as possible have left it open. (Obviously, you don’t want to do this in a real life production scenario.)

First, install kubectl:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
Now let’s install minikube. I’m using minikube here to hopefully minimize the number of steps you need to do from a fresh install but feel free to use whatever you want.
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
minikube start --vm-driver=kvm2

Then install the kvm2 driver. For this box I needed to install the libvirt suite of tooling:

sudo apt-get install libvirt-daemon-system libvirt-clients bridge-utils

Libvirt is this rather old and nasty library used to interact with KVM, but it has a ton of integrations and there aren’t that many alternatives.

If you are having trouble after this step you can run this quick validation check to ensure everything is set up:

virt-host-validate

Also, ensure you are in the right group to interact with KVM:
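On Ubuntu the relevant groups are usually libvirt and kvm (group names vary by distro, so treat these as assumptions). Here is a quick way to check your membership and print the usermod command to run if you are missing one:

```shell
# Check libvirt/kvm membership; print the (sudo) command to fix any gaps
me=$(id -un)
for g in libvirt kvm; do
    if id -nG "$me" | grep -qw "$g"; then
        echo "$me is already in group $g"
    else
        echo "run: sudo usermod -aG $g $me   # then log out and back in"
    fi
done
```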

After getting all of this installed you might find the need to reset your session (quickest way is to just logout/login again).

Next up – let’s install the kubevirt operator. This is what really ties the room together.
export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)
echo $KUBEVIRT_VERSION
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml

Then let’s create a resource:

 kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml

Now let’s install virtctl. Are we getting tired yet?

curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
chmod +x virtctl

Then we’ll import with CDI.

wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/storage-setup.yml
kubectl create -f storage-setup.yml
export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
kubectl get pods -n cdi

Ok! Whooh! If you got through all of that we are almost to the finish line. Let’s grab a template for our persistent volume claim:

 wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/pvc_fedora.yml 

Now, edit the line to show where you stuffed the original disk image. In my example it looks like this (again this is just an example to keep things easy – you wouldn’t/shouldn’t do this in real life):

 cdi.kubevirt.io/storage.import.endpoint: "https://storage.googleapis.com/totally-insecure/goweb.img.xz"

Let’s create it:

kubectl create -f pvc_fedora.yml
kubectl get pvc fedora -o yaml

You can check out the import as it happens but wait until you see the success message:

 cdi.kubevirt.io/storage.pod.phase: Succeeded

Now we can create the actual vm:

wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/vm1_pvc.yml
kubectl create -f vm1_pvc.yml

Now if you:

kubectl get vmi

You should see your instance running.

If you have minikube you can now do this:

Wow! We just deployed a unikernel to K8S. Easy? Well, I’ll let you decide that.

Of course, if you are using a public cloud like AWS or GCP and you don’t want to go through all of that, these two commands will get the same webserver deployed with a lot less hassle, more security, and better performance with less waste:

ops image create -c config.json -a goweb
ops instance create -z us-west2-a -i goweb-image

Until next time.


Linux firewall basics with ufw

The ufw (uncomplicated firewall) represents a serious simplification to iptables and, in the years that it’s been available, has become the default firewall on systems such as Ubuntu and Debian. And, yes, ufw is surprisingly uncomplicated – a boon for newer admins who might otherwise have to invest a lot of time to get up to speed on firewall management.

GUIs are available for ufw (like gufw), but ufw commands are generally issued on the command line. This post examines some commands for using ufw and looks into how it works.

First, one quick way to see how ufw is configured is to look at its configuration file – /etc/default/ufw. In the command below, we display the settings, using grep to suppress the display of both blank lines and comments (lines starting with #).

$ grep -v '^#\|^$' /etc/default/ufw
IPV6=yes
DEFAULT_INPUT_POLICY="DROP"
DEFAULT_OUTPUT_POLICY="ACCEPT"
DEFAULT_FORWARD_POLICY="DROP"
DEFAULT_APPLICATION_POLICY="SKIP"
MANAGE_BUILTINS=no
IPT_SYSCTL=/etc/ufw/sysctl.conf
IPT_MODULES="nf_conntrack_ftp nf_nat_ftp nf_conntrack_netbios_ns"

As you can see, the default policy is to drop input and allow output. Additional rules that allow the connections you specifically want to be accepted are configured separately.

The basic syntax for ufw commands looks like the below, though this synopsis is not meant to imply that typing only “ufw” will get you further than a quick error telling you that arguments are required.

ufw [--dry-run] [options] [rule syntax]

The --dry-run option means that ufw won’t run the command you specify, but will show you the results that you would see if it did. It will, however, display the entire set of rules as they would exist if the change were made, so be prepared for more than a few lines of output.

To check the status of ufw, run a command like the following. Note that even this command requires use of sudo or use of the root account.

$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       192.168.0.0/24
9090                       ALLOW       Anywhere
9090 (v6)                  ALLOW       Anywhere (v6)

Otherwise, you will see something like this:

$ ufw status
ERROR: You need to be root to run this script

Adding “verbose” provides a few additional details:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22                         ALLOW IN    192.168.0.0/24
9090                       ALLOW IN    Anywhere
9090 (v6)                  ALLOW IN    Anywhere (v6)

You can easily allow and deny connections by port number with commands like these:

$ sudo ufw allow 80 <== allow http access
$ sudo ufw deny 25 <== deny smtp access

You can check out the /etc/services file to find the connections between port numbers and service names.

$ grep 80/ /etc/services
http 80/tcp www # WorldWideWeb HTTP
socks 1080/tcp # socks proxy server
socks 1080/udp
http-alt 8080/tcp webcache # WWW caching service
http-alt 8080/udp
amanda 10080/tcp # amanda backup services
amanda 10080/udp
canna 5680/tcp # cannaserver 

Alternately, you can use service names like in these commands.

$ sudo ufw allow http
Rule added
Rule added (v6)
$ sudo ufw allow https
Rule added
Rule added (v6)

After making changes, you should check the status again to see that those changes have been made:

$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       192.168.0.0/24
9090                       ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere        <==
443/tcp                    ALLOW       Anywhere        <==
9090 (v6)                  ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)   <==
443/tcp (v6)               ALLOW       Anywhere (v6)   <==

The rules that ufw follows are stored in the /etc/ufw directory. Note that you need root access to view these files and that each contains a large number of rules.

$ ls -ltr /etc/ufw
total 48
-rw-r--r-- 1 root root 1391 Aug 15 2017 sysctl.conf
-rw-r----- 1 root root 1004 Aug 17 2017 after.rules
-rw-r----- 1 root root 915 Aug 17 2017 after6.rules
-rw-r----- 1 root root 1130 Jan 5 2018 before.init
-rw-r----- 1 root root 1126 Jan 5 2018 after.init
-rw-r----- 1 root root 2537 Mar 25 2019 before.rules
-rw-r----- 1 root root 6700 Mar 25 2019 before6.rules
drwxr-xr-x 3 root root 4096 Nov 12 08:21 applications.d
-rw-r--r-- 1 root root 313 Mar 18 17:30 ufw.conf
-rw-r----- 1 root root 1711 Mar 19 10:42 user.rules
-rw-r----- 1 root root 1530 Mar 19 10:42 user6.rules

The changes made earlier in this post (the addition of port 80 for http access and port 443 for https, i.e. encrypted http, access) will look like this in the user.rules and user6.rules files:

# grep " 80 " user*.rules
user6.rules:### tuple ### allow tcp 80 ::/0 any ::/0 in
user6.rules:-A ufw6-user-input -p tcp --dport 80 -j ACCEPT
user.rules:### tuple ### allow tcp 80 0.0.0.0/0 any 0.0.0.0/0 in
user.rules:-A ufw-user-input -p tcp --dport 80 -j ACCEPT
# grep 443 user*.rules
user6.rules:### tuple ### allow tcp 443 ::/0 any ::/0 in
user6.rules:-A ufw6-user-input -p tcp --dport 443 -j ACCEPT
user.rules:### tuple ### allow tcp 443 0.0.0.0/0 any 0.0.0.0/0 in
user.rules:-A ufw-user-input -p tcp --dport 443 -j ACCEPT

With ufw, you can also easily block connections from a system using a command like this:

$ sudo ufw deny from 208.176.0.50
Rule added

The status command will show the change:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22                         ALLOW IN    192.168.0.0/24
9090                       ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere
Anywhere                   DENY IN     208.176.0.50    <== new
9090 (v6)                  ALLOW IN    Anywhere (v6)
80/tcp (v6)                ALLOW IN    Anywhere (v6)
443/tcp (v6)               ALLOW IN    Anywhere (v6)

All in all, ufw is both easy to configure and easy to understand.


PostgreSQL diff Explained

Photo by Markus Spiske on Unsplash

A normal development flow requires continuously patching the production database with local changes, normally made automatically by the ORM software. This method is not perfect but deceptively simple (all we’ll use is standard Unix commands), and it is good enough for us.

You will not see a database name or user name, because we follow the convention of an operating system account owning the database both locally and in production, with both databases named after that user, and automatic ssh login configured for the remote.

The diff command and pg_dump

The first approach is to compare schemas. Suppose you have your local dev version on localhost and the production version on another server called remote-server.

1. Obtain backup of db structure in production server

ssh remote-server pg_dump -s -Ox > remote1.sql

Note: We obtain only the schema (-s), without any ownership or privilege GRANT SQL commands (-Ox).

2. Obtain backup of local database

pg_dump -s -Ox > local.sql

3. Compare the two schemas

The next command compares the two files and filters out the lines needed in remote to make it like local, using the standard Unix filters grep and sed.

diff remote1.sql local.sql | grep '^>' | sed 's/^> //'
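To see what that filter actually does, here is a toy run with two small stand-in files (the file names and contents are just placeholders for the real schema dumps):

```shell
# local.sql has one statement that remote.sql lacks
cd "$(mktemp -d)"
printf 'CREATE TABLE a;\n' > remote.sql
printf 'CREATE TABLE a;\nCREATE TABLE b;\n' > local.sql

# Keep only lines unique to local.sql and strip diff's "> " prefix
diff remote.sql local.sql | grep '^>' | sed 's/^> //'
# prints: CREATE TABLE b;
```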

Difference in database versions

If the Postgres versions are different, chances are there will be a lot of differences that are not essential in nature but only in syntax, e.g. schema prefix usage.

To be bulletproof we should make this comparison using the same version of Postgres. For this, you should create a local database with the remote schema and then make the comparison:

createdb temp1
psql -f remote1.sql temp1
pg_dump -s -Ox temp1 > remote2.sql

Now, compare the schemas again, this time both dumped by the same server version:

diff remote2.sql local.sql 

Again to show the sql needed to patch remote to be like local use:

diff remote2.sql local.sql | grep '^>' | sed 's/^> //'

The only catch is that if there are only field differences, the generated SQL could be invalid, so check it out before proceeding.

Finally, this script does it all:

#!/bin/bash
# Compare a remote database with its local counterpart
if [ $# -ne 1 ]
then
    echo Please give remote server arg
    echo Usage: postgres-diff remote-server-addr
    exit 1
fi
SERVER=$1
TEMPDB="tempdb-$RANDOM"
REMOTE1="/tmp/r1-$RANDOM"
REMOTE2="/tmp/r2-$RANDOM"
LOCAL="/tmp/loc-$RANDOM"
echo Getting schema from remote server
ssh $SERVER pg_dump -s -Ox > $REMOTE1.sql
echo Getting schema from local server
pg_dump -s -Ox > $LOCAL.sql
echo Restoring remote in local
createdb $TEMPDB
psql -f $REMOTE1.sql $TEMPDB > /dev/null
echo Getting diffs, this SQL could be invalid
pg_dump -s -Ox $TEMPDB > $REMOTE2.sql
diff $REMOTE2.sql $LOCAL.sql | grep '^>' | sed 's/^> //'
dropdb $TEMPDB

Torrent app? Defrag tool? Docker assistant?! No, this is Ubuntu’s new install icon

Once you know that the icon on the right is supposed to represent Ubiquity, the Ubuntu installer, it kinda works as a motif.

I say ‘kinda’ because if I take my eyes off the icon for a few seconds, then look back at it, it stops making sense again.

But new Ubuntu users, those booting up a live image to “try Ubuntu without installing”, will be tasked with understanding that this icon represents the app they need to use to put Ubuntu on their disk.

It does not represent a torrent app. Or a defrag tool. Nor is it a handy container set-up wizard. And no, it has nothing to do with ‘Snap’ apps or any other isolated packaging formats…

Also: is Ubuntu teleporting in from the ether or disintegrating Thanos style into nothingness? I can’t quite tell…

Now, I’m just an Ubuntu user; I am not a designer (have you seen the thumbnails I make?). To put it bluntly: my opinion doesn’t matter on this.

But I have a hunch that I won’t be the only one left scratching their chin in bafflement at this particular icon. It doesn’t scream out at me what it does (which is kinda important).

Clearly the Ubuntu design team disagree.

I assume they have hit some usability issues with the traditional “downward arrow icon on disk” arrangement (cos otherwise they wouldn’t be changing it at all, right?).

And I’m not saying it’s a bad icon in general, it just feels like a bad icon for this particular task.

I also wonder if an LTS release is the best place to experiment with non-conventional “install” metaphors. Trendy or not, the old ‘downward arrow on disk’ tends to be universally understood.

Me? I can’t tell if this Ubuntu icon is coming or going — but I kinda hope it’s the latter (if you catch my drift).


Waveform Free is a Professional Digital Audio Workstation with Linux Support

If you like to make music on Linux then you’ve no doubt heard about Tracktion T7 DAW, considered one of the best free digital audio workstations around.

But that’s changing. A new, free replacement for Tracktion T7 DAW has been announced — and it looks mighty impressive!

First a bit of background. New versions of Tracktion for Windows, macOS and Linux are typically released as paid, closed source software.

But every few years the company behind the pro audio tool releases an older version entirely for free (as in beer), with no functional limitations at all.

The “catch” is that while it’s free to download it is not open source software. And you don’t get access to email support (thankfully there’s a great community forum that stands in for that).

The most recent “free” version is Tracktion 7. This is an incredibly powerful music production platform that gives all the tools you need to record and produce music on Linux, complete with support for third-party virtual effects and instruments (VSTs).

Whether you want to create and edit podcasts, record, edit and mix band recordings, create electronic music using plugins, VSTs and other audio processing effects, or process audio for use in a video — this tool is up to the job.

But in a break with expectation the next free version of the software won’t be Tracktion T8 (or even Tracktion 10) as expected.

Instead, it’ll be a brand new version: Tracktion Waveform.

Tracktion Waveform Free

[embedded content]

In an unusual (but hey: not complaining) move Tracktion has decided to release a free version of their latest software, Waveform 11.

Waveform 11 Free is pitched as the free (as in beer) replacement for T7 DAW. Indeed, links to download Tracktion T7 now redirect to the Waveform landing page.

This switch means audio enthusiasts no longer need to wait for an ‘older’ release to be gifted as the current version (with improved UI, better performance, multi-monitor support, new samplers and tools, and even a VST sandbox – no more crashing the entire app when an instrument goes rogue) is available gratis.

“Waveform Free […] has more capabilities than most enthusiast producers will ever need. There are no restrictions whatsoever – unlimited track count, add popular plugins and enjoy the deeply capable feature set,” reads the official blurb.

But naturally there is a catch.

Tracktion Waveform Free Has Limitations

A raft of functionality is clipped in the free version — enough to clip the creative capabilities that might otherwise be available.

For instance, there’s no Track Editor; no Arranger Track; no quick actions; and no layout customisations permitted.

Mastering is less viable as the powerful Master Mix DSP is kept firmly behind the paid-only curtain, as are a bunch of other useful DSP effects, samplers, and drum machines.

The upside is that you might not miss this functionality unless you’re tackling advanced tasks or are a professional editor. There is an in-app option in Waveform Free to pay $69 to unlock the features should you really need them.

Download Tracktion Waveform Free for Linux

Overall this is an interesting new approach for what is a well-regarded piece of software. It’s especially nice that Linux versions of both the free and pro builds will continue to be made available.

You can download Waveform Free for Windows, macOS and Linux directly from the Tracktion website:

Download page for Tracktion Waveform Free

Although free, to download Waveform Free you need a valid e-mail address and have to accept a privacy policy.

I also found that trying to download the app kept resulting in a server error. A workaround is to create an account here, log in, then access the download page. Enter all your details and bam: access.

The Linux build is provided as a 64-bit .deb installer roughly 50MB in size.

While I don’t use digital audio workstation (DAW) software myself (thus I can’t offer too much insight into whether this switch will be a bad thing per se) I have to say: a newer code base, with better OS support, improved UI and performance, PLUS a tonne of new tools and features …all for free… doesn’t sound too bad, does it?


How To Use Command Line

💻 Command Line:

The command line is a tool or interface for interacting with a computer using only text rather than the mouse.

Command-Line is an essential tool for software development. We can execute a wide variety of programs on our computer through this. In this tutorial, we are going to learn the necessary UNIX commands for development.

UNIX commands are the type of commands used on Linux and macOS.

To run some basic UNIX commands on Windows, you can download the Git Command Line Interface from Git SCM.

🔰 Getting Started:

✨ Let’s start to learn (can use it as a reference guide) …

✔ Check the Current Directory ➡ pwd:

On the command line, it’s important to know the directory we are currently working in. For that, we can use the pwd command.

It shows that I’m working in my Desktop directory.

✔ Display List of Files ➡ ls:

To see the list of files and directories in the current directory, use the ls command in your CLI.

Shows all of the files and directories of my Desktop directory.

• To show the contents of a directory, pass the directory name to the ls command, i.e. ls directory_name.

• Some useful ls command options:

  1. ls -a : list all files, including hidden files starting with ‘.’
  2. ls -l : list with the long format
  3. ls -la : list with the long format, including hidden files

✔ Create a Directory ➡ mkdir:

We can create a new folder using the mkdir command. To use it, type mkdir folder_name.

Use the ls command to see whether the directory was created or not.

I create a cli-practice directory in my working directory, i.e. the Desktop directory.

✔ Move Between Directories ➡ cd:

It’s used to change directory, or to move to other directories. To use it, type cd directory_name.

You can use the pwd command to confirm your directory name.

I change my directory to the cli-practice directory, and for the rest of the tutorial I’m gonna work within this directory.

✔ Parent Directory ➡ ..:

We have seen the cd command to change directory, but if we want to move back, or to the parent directory, we can use the special symbol .. after the cd command, like cd ..
✔ Create Files ➡ touch:

We can create an empty file by typing touch file_name. It’s going to create a new file in the current directory (the directory you are currently in) with your provided name.

I create a hello.txt file in my current working directory. Again, you can use the ls command to see whether the file was created or not.

Now open your hello.txt file in your text editor and write Hello Everyone! into your hello.txt file and save it.

✔ Display the Content of a File ➡ cat:

We can display the content of a file using the cat command. To use it, type cat file_name.

Shows the content of my hello.txt file.

✔ Moving Files & Directories ➡ mv:

To move a file or directory, we use the mv command.

By typing mv file_to_move destination_directory, you can move a file to the specified directory.

By entering mv directory_to_move destination_directory, you can move the directory along with all the files and directories under it.

Before using this command, we are going to create two more directories and another txt file in our cli-practice directory:

mkdir html css
touch bye.txt

Yes, we can list multiple directory and file names one after another to create them all in one command.

I move my bye.txt file into my css directory and then move my css directory into my html directory.

✔ Renaming Files & Directories ➡ mv:

The mv command can also be used to rename a file or a directory.

You can rename a file by typing mv old_file_name new_file_name, and rename a directory by typing mv old_directory_name new_directory_name.

I renamed my hello.txt file to hi.txt, and the html directory to folder.
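The renames from this section, sketched with freshly created stand-ins for hello.txt and html:

```shell
mkdir -p cli-practice && cd cli-practice
touch hello.txt
mkdir -p html
mv hello.txt hi.txt   # rename the file
mv html folder        # rename the directory
ls                    # now lists hi.txt and folder
```

Note that mv only "renames" when the destination does not already exist as a directory; otherwise it moves the source inside it, as in the previous section.
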

✔ Copying Files & Directories ➡ cp:

To do this, we use the cp command.

• You can copy a file by entering cp file_to_copy new_file_name.

I copied the content of my hi.txt file into hello.txt. For confirmation, open hello.txt in your text editor.

• You can also copy a directory by adding the -r option, like cp -r directory_to_copy new_directory_name.

The -r option stands for "recursive", meaning it copies all of the files, including those inside subfolders.

Here I copied all of the files from folder to folder-copy.
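Both copy variants in one self-contained sketch (the bye.txt inside folder is an assumed stand-in for the tutorial's earlier state):

```shell
mkdir -p cli-practice && cd cli-practice
echo 'Hello Everyone!' > hi.txt
cp hi.txt hello.txt                # copy a file; hello.txt gets the same content
mkdir -p folder && touch folder/bye.txt
cp -r folder folder-copy           # -r copies the directory and its contents recursively
ls folder-copy                     # shows bye.txt
```
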

✔ Removing Files & Directories ➡ rm:

To do this, we use the rm command.

• To remove a file, use rm file_to_remove.

Here I removed my hi.txt file.

• To remove a directory, use rm -r directory_to_remove.

I removed the folder-copy directory from my cli-practice directory, i.e. the current working directory.
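The removals from this section; the file and directory are recreated first so the snippet can run on its own. Be careful: rm deletes permanently, with no trash can to recover from.

```shell
mkdir -p cli-practice && cd cli-practice
touch hi.txt
mkdir -p folder-copy
rm hi.txt           # delete the file
rm -r folder-copy   # delete the directory and everything inside it
```
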

✔ Clear Screen ➡ clear:

The clear command is used to clear the terminal screen.

✔ Home Directory ➡ ~:

The home directory is represented by ~ and is the base directory for the user. To move to the home directory, we can use the cd ~ command, or simply cd on its own.
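For example, in a POSIX shell (where ~ expands to the value of $HOME):

```shell
cd ~    # jump to the home directory
pwd     # prints the value of $HOME
cd      # plain cd with no argument does the same thing
pwd
```
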

🛠 Tools I used for this tutorial:-

  1. Fluent Terminal
  2. Git Bash Shell

Thanks for reading and stay tuned.🙂👋

Previously published at https://dev.to/iamismile/command-line-tutorial-58km


Some Fabulously Faithful Focal Fossa Fashion

The fabulously fervent folks in the Ubuntu France community have fashioned Ubuntu’s latest mascot animal into a first-rate new t-shirt design.

Not that that particular activity is new; the Ubuntu France team has created custom artwork to ‘showcase’ the past few Ubuntu releases — as you may well know if you follow this site over on Twitter:

Pretty nice, huh?

Since the Ubuntu shop closed down there’s no “official” outlet to get Ubuntu-branded merchandise like t-shirts. This has led some amateurs to try their hand at producing a few bespoke efforts… to varying degrees of success!

But with a new Ubuntu LTS release looming, and a characterful new animal and adjective combo to colourfully illustrate, the French Ubuntu team has turned out a terrific new t-shirt — one that I think is their best work yet:

The new design will be available to buy in the coming months from the Event Libre website. It’ll be available in unisex and women’s fits, both on organic ring-spun cotton, priced from €15 (which will increase to €20 after the Ubuntu 20.04 release).

It would be available to purchase a bit sooner but… Well, you know why it isn’t.

Shop listing for the Ubuntu Focal Fossa T-shirt

On the subject of custom focal artwork, I’ve really enjoyed seeing what people have come up with using the Ubuntu 20.04 artwork assets that have been made available. There’s a great thread on the Ubuntu Discourse.

Although my little Ubuntu MATE powered microPC isn’t yet running Focal, I felt it needed a little feline flair to it anyhow…

Thanks Olive


Forking Great: the Arc GTK Theme Lives!

A fork of the Arc GTK theme is available on GitHub and it picks up exactly where the theme’s previous authors left off.

Why is this news? Well, you may recall I wrote about the dire state of Arc’s maintainer-ship a few weeks back. To put it bluntly: there isn’t one.

But a number of you got in touch with me after I published that post to let me know about a new, actively maintained, albeit unofficial, continuation.

And boy I am glad that you did!

This isn’t a stale fork of the Arc theme code, either. This is an actively maintained branch with lots of bug fixes and other finesse to bring the theme as up-to-date as possible.

‘Jnsh’ is the developer behind the revival and says the plan is to keep the theme “…updated with new toolkit and desktop environment versions, resolve pre-existing issues, and improve and polish the theme while preserving the original visual design.”

I.e. exactly what you want to hear, right?

There’s no official stable release as of writing, but a formal versioned drop of the ‘next-gen’ Arc theme is due to appear in the coming weeks. That version will carry lots of fixes, including a bunch for the recent GNOME 3.36 release.

Until then, you can manually build the “new” Arc theme from the source code that’s up on GitHub:

View the Arc theme (fork) on GitHub

If you’re running Arch Linux then the ‘next-gen’ version is already available in the AUR.

While it’s a shame that the original author of the Arc theme hasn’t handed the keys to the repo over to the new team (meaning a lot of distros are still packaging the old, buggy version), the plus side is that at least you now know about it.

I’ll update my “how to install the Arc GTK theme on Ubuntu” guide in due course, though spoiler: it won’t remain quite as succinct as running apt install arc-theme, as it is presently.

Thanks to all who sent this in!
