It has already been 12 months since I started using my desktop PC (the first one in a very long time). In my previous blog post I described the reasons why I decided to move away from Apple hardware. In this one I will tell you how I managed to drop OS X and its ecosystem with only minor pains.

I feel I also owe you some insights – which applications I had to swap and which I was lucky enough to keep. Because I work with Java on a daily basis I did not have any major trouble with the portability of my software; however, only a few of the many programs I use are actually written in Java.

Why the heck Arch/Manjaro?

From my personal observation, most people who have any contact with the Linux ecosystem can distinguish the major distributions such as Debian, Ubuntu, CentOS and Red Hat. Of course there are many more. I chose an Arch-based distribution mainly because, at the time I was installing it, it was one of the very few which supported F2FS. I have an NVMe drive and I wanted to get as much performance out of it as possible. Java is a compiled language, and compilation always involves reading sources and writing compiled code to disk. I build quite large open source projects, sometimes a few times an hour. Other Linux distributions supported F2FS, but not as the system partition. Keep in mind this was almost a year ago, and Ubuntu/Debian have likely improved their support for this filesystem since then.

Among the biggest benefits of running an Arch-based distribution, I really appreciate the fact that I don't need to reinstall the system in order to migrate to a new major version. Again, this is possible with other distributions, but for many of them it is not the recommended way and/or requires manual steps. As I learned from my OS X experiences, the update process can get tricky. The last thing I wanted to suffer after dropping the Mac was waiting long hours for a backup restoration. Manjaro/Arch is the only distribution I know of which gives a simple promise – a rolling release. It means that the system you have installed will keep moving forward. You are guaranteed to stay on the latest versions of packages, or very close to them.
This is pretty much what open source development looks like – using the latest releases whenever possible.

Porting applications

One of the biggest changes I had to accept was the package system. Like almost every user who programs under Apple OS X, I had used Homebrew for several years. I installed many things with it, including vim and Sublime, as well as quite a few others I cannot recall right now. Two years ago I managed to get brew (actually Linuxbrew) working on Linux in order to provision a Vagrant box. Sadly, for years the brew project didn't want to support Linux. I can understand that, as the variety of software versions used under Linux can easily overwhelm. I ended up with a fresh system installation and no familiar package manager.

What saved my life was Docker. If I had a Facebook account and needed to describe my relationship with Docker, I would most likely summarize it as "it's complicated". I enjoy running some things in Docker on my NAS, but I don't need to upgrade those containers too often. Each time I have to do anything with them I have to remind myself of the command syntax. Until I was forced to use it, I felt that I didn't know Docker well enough. I think I still lack a lot of knowledge about it.
Docker solved my problems with the lack of good-quality ActiveMQ and Cassandra packages under Linux. I was also able to use company-specific containers to run the parts of our product responsible for the user interface, report generation and so on, which I had previously run using nginx and some fancy tools such as PhantomJS.

Development environment setup

My development environment includes an IDE (IntelliJ), which I got via the Toolbox installed from Arch packages. I use DiffMerge, and I was able to install it via the package system too. I had to tweak the checksums to match the latest releases, but beside that it went well. My ad hoc editor (Sublime) is also available via packages.

The biggest change I accepted while moving away from Apple was the change of shell. I had used ZSH for years and I was happy with it. But as usual, there is always someone who will crush your confidence in a tool. For me that person was Zachary, who I worked with at Rocana (acquired by Splunk). He told me that fish is much better at handling completion and is able to keep a history of commands executed in a given directory. This got me interested, but not enough to test it immediately. It turned out to be a very good recommendation, and I started using fish as my primary shell after moving to Linux. Beside that, I also started using Terminator, which makes a lot of sense on a 4K screen, where tabs in a terminal emulator simply leave too much empty space.

I had also been a long-time user of SourceTree, a handy application for managing git repositories.
Don't get me wrong, I am able to work with git from the command line most of the time, but sometimes, when I have more changes or a refactoring, I prefer to be picky about checking in files or even parts of them. I never got used to any IDE for that, because there are too many distraction points in there – switching to an application limited to version control lets me focus on the commit. SourceTree was very good at that, with a simple and clean user interface allowing me to select chunks of a file to be staged and then committed. Sadly, Atlassian decided not to port SourceTree to Linux, so I had to look for alternatives.
I ended up buying GitKraken. It offers a subset of Atlassian's product functionality, but it gives me all the things I was looking for, and it works. I was concerned by the fact that it is yet another Electron app – JavaScript running inside an embedded browser, pretending to be a desktop app – but as I said, it works.

For quite a long time now I haven't used Adium, Pidgin or any other chat client. Under OS X I switched to Franz, but due to its strange policy in version 5, which forces account registration, I decided to move to Rambox. It does the same thing – it wraps web pages into tabs inside its own window. Since most communication nowadays goes over some form of cloud service, having a single window to aggregate them is useful for keeping all instant messenger distractions in one place.

Browser and surroundings

If you asked me what I miss most from OS X, I would definitely point to Apple Mail and the synchronization of contacts between phone and computer. This alone is the thing I regret losing most. I do use Thunderbird, but it's not the same – it really feels far worse than the OS X default app. I had used Thunderbird before 2008, then around 2014 for a corporate mailbox, and came back to it last year. My general feeling after launching Thunderbird after a break of several years is that it didn't change at all. In fact it looks as ugly as before, with the same confusing settings panel and still no good contacts plugin. I could have missed one – if you can recommend something, just post its name in the comments.

While moving away from OS X I decided to drop Chrome. Why? Because Chrome is the best tracking software ever made, and I felt I had to give it up. I got really concerned after my passwords, which I had stored in the browser, ended up on passwords.google.com, where I can get them back in plaintext. It's not that I wasn't aware Google had them in the cloud, but it was said everywhere that Google was not able to read the encrypted copy of my data.

My perception changed dramatically the first time something stored in my browser was shown to me on a web page. I remember the old screen for looking up passwords in Chrome. It wasn't particularly great, but I would prefer it over having them on a web page. More importantly, I did not expect that change. Because of it I stopped using Google as my default and moved to Firefox with DuckDuckGo. It's different, but not that much. I still turn to Google for some edge cases; my development box, however, is owned by the duck. 😉

Drawbacks

I’m really happy with my PC, yet – to be fair with you – there are several things which are quite annoying:

4K/HiDPI support

The X11 session used by default with GNOME 3 under Manjaro does not support fractional scaling. This means that it supports 100, 200 or 300 percent scaling, but not 125%. This is a real pain, because for me 100% is too little and 200% is way too big. It has to be adjusted manually with xrandr, and using xrandr has side effects – not all apps get scaled properly with it, Firefox being one of them. Wayland, which is supposed to solve this, has troubles with Nvidia cards. For now I use the 100% scale and have adjusted all apps to work with it (Qt4, Qt5). However, each time I visit a new web page I have to zoom in to somewhere between 130% and 170%, depending on its layout.
I can live with the differences between Qt5, GTK and other toolkits without any issues. I keep complaining about Thunderbird, but if I were forced to stay with it, I could still manage.
There is one more, even more annoying thing – I can't get 60 Hz on my display, for whatever reason. This means there is no point in watching any movies on this box, even on YouTube, as you will experience flickering. It's good for staying away from movies, but let's face it – it's an issue.

Linux drivers

Nvidia drivers are available, but they cause some troubles with Wayland (a replacement for the X server). The open source driver does not solve this either. If you consider going this route, it might be a good idea to try the Intel graphics used in mobile devices. A friend of mine managed to get Wayland up on his Ubuntu-powered Dell XPS 15 with no issues.
I also have a 7-year-old Canon laser printer which was awful to get working under macOS. Debugging its issues usually ended with resetting the CUPS system completely. I didn't even try to get it running under Linux. All printing and accounting I still do on OS X with dedicated software.

Power management works fine – I can put the computer to sleep and it will wake up. Occasionally it fails to wake, mainly after a system update without a restart; then I need to ssh into it and restart the display manager. I see core dumps in the logs from time to time, but I don't track them unless they become a source of real trouble.

Overall summary

I am quite happy with my current setup. Despite the many minor issues and display troubles I experienced under Linux, I feel more productive than before. It's not only about the tools, because those are available under OS X as well, but about the computer's performance. I don't use it for entertainment, and it works as expected. So far I have been able to solve every issue with answers found online.

Linux on the desktop is definitely not as nice nor as stable as OS X; there is still a huge gap between the two in terms of usability.

Yet I'm happy to say that "this is the year of Linux", at least for me.

PS. It took me just 7 months to finish this blog post since the first draft, made in February 2018. 😉

I must start with a small confession. I am not a computer kid. My first computer was an AMD K6 with a 266 MHz clock, which I got for Christmas back in 1999. I have seen an Amiga in my life, but I wasn't part of the long-standing battle between platforms. I saw Norton Commander on the PC of a friend who got his Pentium in '95, but I never had to run such a tool on my own. The point of bringing up this whole history of how I came to computers is to show you that I am relatively fresh to it.


One of the most important things in software, if not the most important, is the release process. Whole books have been written about "shipping software", and the software release is one of the key steps which must happen in order to deliver our programs to end users. In this short post I will give you a tip on how to take a test drive of a release which is not published yet. One of the main principles of Maven Central is "what goes there, stays there", meaning that anything which becomes public will stay public. For that reason we, as software developers, want to deploy artifacts which are free of any major issues at release time.

A staged release is one of the things supported by the maven-release-plugin. The overall idea behind it is to let people take a test drive before deploying artifacts to public repositories from which they cannot be removed. Of course this might be seen as a completely unnecessary step if you release a small library, but it can be extremely useful for bigger projects, avoiding something I would call a quick fix hiccup.

Quick fix hiccup

A situation where a project gets released often is welcome. But a situation where a project gets released with bugs is not welcome at all. Again, this might depend on the actual use case: some small failures discovered during a release might be acceptable, while others can be show stoppers. A quick fix hiccup happens when a project gets released and then the people who start using it keep discovering important issues, which leads to release after release.
Let's assume we released a new major version, 3.0.0. With this release a new issue is discovered, so we release 3.0.1. After that we find another potentially dangerous bug, generating a new set of artifacts in version 3.0.2.
As you can see, in both cases bug fixes have been made; however, people who already started using the project released as 3.0.0 need to bump the version twice in a very short time window. Wouldn't it be better to do one release, but without major bugs?

Staged release

Someone who thought about this problem came to a very simple conclusion – if the problem is the impossibility or inconvenience of unpublishing artifacts, then it might be better to hold on for a moment and let people test the binaries before they get published.

The deployment phase in such a release process is divided into two stages – first moving artifacts to a test repository, and second moving them from the test repository to the final one. The test repository can be anything – an FTP server, or maybe a filesystem. In general, it is just a place accessible to the interested parties who would like to test our artifacts.
Once the testing phase is done and no major issues are found, the artifacts are deployed to the public repository. For regular projects that would be Maven Central or any other location accessible to a wider audience.

Testing of staged release

Here we come to the technical part of this article – namely, how to become a tester of staged artifacts. I will use Maven as the reference, but you can use any other build tool capable of downloading contents from configurable locations.

Once we know the location of the binary artifacts to be tested, we need to modify Maven's settings.xml to include the new remote repository:

<profile>
  <id>karaf-4.1.4</id>

  <activation>
    <activeByDefault>false</activeByDefault>
  </activation>

  <repositories>
    <repository>
      <id>karaf-4.1.4</id>
      <name>Karaf 4.1.4 Stage Repo</name>
      <url>https://repository.apache.org/content/repositories/orgapachekaraf-1102/</url>
      <snapshots><enabled>false</enabled></snapshots>
      <releases><enabled>true</enabled></releases>
    </repository>
  </repositories>
</profile>

This is an additional profile which can be quickly removed after the release is finished or when testing is done. This short piece of configuration forces your Maven installation to scan an additional remote location for metadata and released artifacts. With the profile enabled, you can build your software and verify that the dependency which is about to be released works in your project.
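With the profile in place, a one-off build against the staged artifacts can be run from the command line (a sketch – the profile id follows the example above, and `clean package` stands in for whatever goals your project uses):

```shell
mvn clean package -P karaf-4.1.4
```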

How to build with staged dependency and rollback

It is important to be aware of the difference between the Maven install and package phases. Here, just as a recap, I will note that in most cases we should use the package phase, because install changes the state of the local repository. The profile defined above will affect the local repository too, by downloading things not available in Maven Central. If the release gets cancelled, the binaries we downloaded from the staging repository will become invalid and their checksums will differ. More importantly, your local copy of the dependency will be outdated and will not contain any fixes deployed afterwards.
For that reason, usage of staged artifacts should be combined with a temporary local repository, to avoid the above troubles in the future. You can point Maven to an alternate local repository from the command line via the maven.repo.local property. You can also modify it in your Maven settings; however, that is less convenient. You can also create a temporary settings.xml which will be used only for testing, and point to it via the --settings or -s option on the command line.
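Putting those options together, a test build against staged artifacts could look roughly like this (a sketch – the settings file name and repository path are examples, not prescribed values):

```shell
# settings-stage.xml contains the staging repository profile;
# /tmp/stage-repo is a throw-away local repository.
mvn clean package \
    -s settings-stage.xml \
    -Dmaven.repo.local=/tmp/stage-repo
```

If the release gets cancelled, deleting the throw-away directory is enough to roll back – your regular ~/.m2/repository is never touched.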

Final thoughts

We all know there is no software free of bugs. Staged releases do not guarantee bug-free software, but they help a lot when software runs in many different environments and the authors cannot test nor identify all possible use cases.
Software quality depends on many factors. Quality assurance is very important, but all the effort put into it is there to make sure that software works as expected for end users. Letting them run the software after all internal checks are done, but before it gets published and announced, ensures that the software is free of major bugs – at least for the part of the community which tested it.

A small note at the end about the "administrative" overhead caused by staged releases. An article on maven.apache.org describes the manual procedure necessary to set up Apache Archiva. This area has been improved in Archiva 1.4.

Apache Felix Configuration Admin (CM) is a widely used component responsible for provisioning one of the most common OSGi services. Its main responsibility is to bring configurations stored in property files to services.

While digging into the Felix CM code I found that it is able to create scalar values of a certain type, e.g. Long, but also more complex structures such as arrays or Vectors. The biggest issue was that I couldn't find any way to force it to create an array from a string representation. Thanks to Google (FELIX-4431, found on the 4th page of results) and debugger goodness, I finally managed to do it. Here is the recipe.

Configuration file

The configuration file which is the source of the properties must have a name ending in .config – otherwise the array will not be created.
The property must be written as follows:

property=["value 1", "value 2", "value x"]

Internally, Config Admin also stores information about the value type. By default, created values and collections will consist of elements of type String. If you wish to change the type of the collection, the following prefixes are allowed:

  • T = String
  • I = Integer
  • L = Long
  • F = Float
  • D = Double
  • X = Byte
  • S = Short
  • C = Character
  • B = Boolean

A lowercase prefix letter represents the primitive type. If you want to construct an array of primitive ints, the configuration syntax is the following:

property=i["1", "2", "3"]
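To make the effect concrete, here is a hedged Java sketch of what a service receives once such a property has been parsed – I construct the dictionary by hand instead of running a real Felix CM instance, and the class name is made up:

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class TypedConfigSketch {
    public static void main(String[] args) {
        // Simulated result of parsing: property=i["1", "2", "3"]
        // The lowercase 'i' prefix yields a primitive int[] value.
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("property", new int[] {1, 2, 3});

        // A ManagedService.updated(Dictionary) implementation can then
        // cast the value directly to the primitive array type.
        int[] values = (int[]) props.get("property");
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        System.out.println(values.length + " elements, sum " + sum);
    }
}
```

The point of the sketch is the cast: with a typed prefix the value is no longer a String, so consuming code must expect the corresponding array type.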

Small note for Karaf users

By default the Karaf etc/ directory uses the *.cfg suffix as the fileinstall filter, which means that this feature of Felix Configuration Admin will not work for you. You have two workarounds.
The first one: edit etc/config.properties, find the first line shown in the listing and replace it with the second:

felix.fileinstall.filter = .*\\.cfg
felix.fileinstall.filter = .*\\.(cfg|config)

The second one: create a new file org.apache.felix.fileinstall-config.cfg with the following contents:

felix.fileinstall.dir     = ${karaf.base}/config
felix.fileinstall.tmpdir  = ${karaf.data}/generated-bundles
felix.fileinstall.poll    = 1000
felix.fileinstall.filter  = .*\\.(cfg|config)

Quick summary

I have been using the Configuration Admin service for years, and I didn't realize this feature exists and has been supported for a very long time. I hope this will let you handle your more complex configurations! 🙂

I have used Eclipse for years. Some of you may say that I'm a masochist. Well, people have different preferences. 🙂 I prefer Eclipse over other editors.

What’s the pain?

Eclipse had the same look and feel for years. I used to have the same appearance under Windows/Linux/OS X. Everything was the same except the fonts. I was very unhappy with the default Juno look and feel, which looks like a few widgets deployed in a browser. Even web-based IDEs look better than Juno! There were some posts about that, and some solutions; however, nobody explained how to bring the older look and feel back.

What’s the solution?

It's really simple. Go to Preferences > General > Appearance and change the Theme to Classic.
Here is how the Mac theme looks:

Here is how the Classic theme looks:

Thanks to this small change I can finally upgrade my environment to Juno. I just realised that my Eclipse installation is almost 2 years old!

On the fifth of February I had the unfeigned pleasure of watching Jacek Laskowski present the topic "A practical introduction to OSGi and Enterprise OSGi" (Praktyczne wprowadzenie do OSGi i Enterprise OSGi). You will find a link to the video of Jacek's presentation on his blog. Meanwhile, below is a video with Karaf. 🙂
 

Apache Camel supports a mapped diagnostic context, which can be used to enrich log entries, and there is also a log component which makes it easier to create them. Together they can be used to lay the foundations of activity monitoring without the need to deploy another tool or database.

A few months ago I read an article written by my friend Jacek Laskowski – Enterprise OSGi runtime setup with Apache Aries Blueprint. In his article Jacek describes which bundles should be installed to get Blueprint working. As an IBM employee, Jacek can always promote IBM WebSphere in version X or Y, which started (or will start) supporting Blueprint as a dependency injection mechanism. That's not great for those who do not run IBM products and want something lighter. As you know, Aries and OSGi Blueprint are an alternative to the old-fashioned Spring approach.

One of the biggest benefits of Java is bytecode manipulation. You can change anything you want in your application without touching the source code. That's useful in many cases, from legacy code, where we can't simply modify and recompile a library, up to modern applications, where aspects can be used to handle runtime exceptions. The most popular project is AspectJ, which is part of the Eclipse ecosystem. In this post I am going to show you how to use AspectJ with Karaf.

A few hours ago I found a useful post about preserving message order with ActiveMQ, written by Marcelo Jabali from FuseSource.

In his example Marcelo used a broker feature called Exclusive Consumer. It sends messages to only one consumer, and if that consumer fails, a second consumer gets all the messages. I think this is not the best idea if we have many messages to process. Why wouldn't we use a few consumers with preserved message order? Well, I was sure it was not possible, but during a recent training I found a solution.

Broker configuration


So how do we force ActiveMQ to preserve message order? It's really simple, we just need to change the dispatch policy for the destination. We can do this for all queues or only for selected ones.

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:amq="http://activemq.apache.org/schema/core">

    <broker xmlns="http://activemq.apache.org/schema/core">
        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry queue=">"><!-- Please refer to the second part of this post -->
                        <dispatchPolicy>
                            <strictOrderDispatchPolicy />
                        </dispatchPolicy>
                    </policyEntry>
                </policyEntries>
            </policyMap>
        </destinationPolicy>
    </broker>
</beans>

After this, consumers should receive messages in the same order they were sent by the producer. You can find the example code on GitHub: example-activemq-ordered. You can run everything from Maven:

cd broker1; mvn activemq:run
cd broker2; mvn activemq:run
cd consumer; mvn camel:run
cd consumer; mvn camel:run
cd producer; mvn camel:run

Update

After posting an update about this blog post to Twitter, Dejan Bosanac sent me a few corrections. He is a co-author of ActiveMQ in Action, so his knowledge is much deeper than mine. 🙂
First of all, I mixed up the XML syntax. strictOrderDispatchPolicy is handled by topics, not queues. For the second destination type, strict order is turned on by the strictOrderDispatch attribute set to true on the policyEntry element. This preserves order but, as Dejan wrote, it will break round robin and all messages will go to only one consumer, as in the earlier example given by Marcelo.
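Based on Dejan's corrections, the queue-side configuration would then look roughly like this (a sketch following the policyEntry syntax from the listing above – verify the attribute against your ActiveMQ version):

```xml
<policyEntry queue=">" strictOrderDispatch="true"/>
```

Remember that, as noted, this keeps the order at the cost of dispatching all messages to a single consumer.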

Marcelo also published a second post, about Message Groups, which allow you to preserve order and have multiple concurrent consumers on a queue.
