Keeping an eye on your server

by David Williams on May 13, 2014 3 comments

Knowing how to interrogate your machines and get the *precise* information you need from them is a pretty wide-ranging topic. There are any number of ways of approaching this, depending on what you need the data for. I’m going to go through some of the ways we keep an eye on our hardware and, where necessary, get detailed data from the server for analysis. Please note that the tools and servers we’re looking at here are Linux-based and may not be applicable to other server environments.

One of the main things we need to be aware of to gauge how well the server is running is the server load average. This is essentially a measure of how much demand there is for the CPU, averaged over set time intervals (usually 1 minute, 5 minutes and 15 minutes). We can see the load averages at a glance using the ‘uptime’ command:

root@jasmine [~]# uptime
15:41:41 up 126 days, 1:40, 2 users, load average: 0.10, 0.07, 0.05

This displays (among a few other things) the 3 load averages. A load average of 1 for a computer with 1 CPU core means that the CPU is fully used – anything over 1 means processes are having to queue for CPU time. Our machines have 8-core CPUs, so we can see that this machine is very calm at the moment – a load of >8 would indicate overloading.

However, this command doesn’t give us much, and is entirely static. A better command for keeping an eye on the load and the rest of the system utilization is the ‘top’ command, perhaps the single most useful all-rounder for seeing system status at a glance:

[screenshot: top output]

This command gives us an interactive and continuously updated display with the uptime, load averages, number (and status) of running tasks (processes), the CPU usage, the memory usage, and then a list of running processes which we can order in various ways. It’s essentially a command-line analogue of the Task Manager (Windows) or Activity Monitor (OS X), albeit a lot more flexible in some ways.
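top also accepts a few handy options. For instance, something like the following shows full command lines and restricts the display to a single user’s processes (the user name here is just an example):

top -c -u mysql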

This shows us a sampling of the currently running processes, but what if we need to see a comprehensive list of every single running process on the machine? We can use the ps command. The ps command is used for getting information about processes in various ways; in this instance we want to use the a, u and x arguments to give us a detailed list of every single running process. The output of ‘ps aux’ looks something like this:

[screenshot: ps aux output]
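For instance, to check at a glance whether a particular daemon is present, we can pipe that output through grep (the process name here is just an example):

ps aux | grep mysqld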

We can look through this to get more of an idea of everything that’s currently running, and if we know what we’re looking for we can search through the output (via the ‘grep’ command, as above) to determine whether the expected processes are there or not. This is great, but it’s very hard to read if we just want a summary of the current process list. There are various utilities that do this, but why not write a quick one that does exactly what we want in LiveCode Server?

#!/usr/local/cpanel/cgi-sys/livecode-server

put $0 into tUserFilter
put shell("ps aux") into tProcessList
delete line 1 of tProcessList -- drop the column header line

repeat for each line tLine in tProcessList
   -- if a user filter was given, skip processes belonging to other users
   if tUserFilter <> empty and word 1 of tLine <> tUserFilter then next repeat
   -- count this command (word 11 onwards) against its owning user (word 1)
   add 1 to tOutputArray[word 1 of tLine][word 11 to -1 of tLine]
end repeat

repeat for each key tUser in tOutputArray
   put tUser & ":" & return
   repeat for each key tProcName in tOutputArray[tUser]
      put tab & tProcName & ":" && \
            tOutputArray[tUser][tProcName] & return
   end repeat
   put return
end repeat

This little script gives us a nice summary of the running processes for each user and how many of each are running. We can also give it a user to filter by if we only want to see that user’s processes, using the $x variables – command-line arguments passed to a LiveCode app are accessible by using $ and their position in the argument list (in this instance we use $0 to access the first argument passed, which we expect to be a username).
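Assuming the script is saved as, say, ‘psdigest.lc’ and made executable (the file name is purely illustrative), it could then be run from the shell with an optional username filter:

./psdigest.lc root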

[screenshot: per-user process summary output]

The above tells us all about load and processes, but what if we want to know about what those processes are doing with the disk? There is some standard functionality in Linux for this, but we often use a great utility called ‘iotop’ (non-standard) to get this info, which functions in a similar way to top:

[screenshot: iotop output]

We can see, again, that this server isn’t very busy. One other thing we may want to know about is the current open connections and their status, which we can check using netstat. We almost always use netstat with the -n argument, as otherwise it will try to look up the hostnames of the addresses involved, which is time-consuming.

[screenshot: netstat -n output]

This output contains a list of all the open TCP and UDP connections on the server, with the local address:port and remote address:port pairs for each. This is very useful as we can quickly look through or parse this data for various purposes; for example, we can see all the current MySQL connections by using grep (the Linux search command) with the MySQL port number:

netstat -n | grep 3306

This is useful as we can see where all these connections are coming from, which can aid in diagnosing certain types of issue. We could expand on this slightly to simply get the number of current MySQL connections using the wc (word count) command with the -l (count lines) option:

netstat -n | grep 3306 | wc -l
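Pushing the same idea a little further, a pipeline like the one below shows how many connections each remote address is holding open (treat it as a sketch – the field number assumes the usual netstat column layout, where the foreign address is the fifth column):

netstat -n | grep 3306 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn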

These are some of the basic command line utilities we use to gather information about the system. There are many, many more tools of varying complexity that we use, which would most likely take a small novel’s worth of blog posts to cover. At the heart of it, being able to query the system for specific information at the command line is often faster and more effective than using a complex GUI-based program that may not tell you precisely what you’re looking for.


Matrix Multiplication and Fullscreen Mode

by Ian Macphail on May 9, 2014 6 comments

In the old days, things were pretty simple. If you wrote an app for an iPhone, you knew it would look the same on everyone else’s iPhone too. You could also write a version of your app to fit the fixed size of the iPad. And if you were feeling extra fancy, you could even create an alternate layout for the different orientations.

Then along came Android. Because it is an open source operating system not tied to any particular vendor, device manufacturers are free to make devices of any size or shape they want (within reason, i.e. a rectangle). Consequently, you could be assured that whatever size you designed your app for, your users would end up running it on a device with totally different dimensions.

To tackle this problem we’ve introduced fullscreen modes that allow your app to scale to fit the bounds of the screen in a number of different ways, depending on whether your app will allow some parts of the stack to go offscreen or should use as much screen space as is available while keeping all stack content visible.

Each fullscreen mode can result in different scaling values, as well as repositioning to keep the stack content centred on the screen. What we end up with, then, is a series of transformation operations to be applied when drawing the stack, converting its coordinates to their new size and position.

Within the engine we use a common mathematical trick to represent different types of coordinate transformations – matrix multiplication. I won’t go into too much detail on matrices (you can read more here), but effectively what they provide is a way to combine several different types of geometrical operations into a single object. A matrix can represent a rotation, translation (movement along the x and y axes), or scaling operation. A sequence of such operations can be represented as the multiplication of their representative matrices, producing a single matrix equivalent to the whole sequence.
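As a toy illustration (using homogeneous 2D coordinates, rather than whatever representation the engine uses internally): scaling by 2 and then translating by (10, 5) collapses into one matrix, which then maps any stack point (x, y) in a single step:

\[
\begin{pmatrix}1 & 0 & 10\\ 0 & 1 & 5\\ 0 & 0 & 1\end{pmatrix}
\begin{pmatrix}2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1\end{pmatrix}
=
\begin{pmatrix}2 & 0 & 10\\ 0 & 2 & 5\\ 0 & 0 & 1\end{pmatrix},
\qquad
\begin{pmatrix}2 & 0 & 10\\ 0 & 2 & 5\\ 0 & 0 & 1\end{pmatrix}
\begin{pmatrix}x\\ y\\ 1\end{pmatrix}
=
\begin{pmatrix}2x + 10\\ 2y + 5\\ 1\end{pmatrix}
\]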

The operations to fit a stack’s content within the visible screen space can then be defined in terms of a single matrix, which is used whenever it is necessary to convert from logical stack coordinates to the coordinate system of the view into which it will be drawn. We can use the same matrix to draw the stack, or to convert the location of mouse clicks or touch events, or to make sure pop-up menus and dialogs are positioned correctly regardless of the scale or position of the stack within its window. This ensures that the appearance of the stack and the location of any events related to it are kept consistent whatever the fullscreen mode might be.
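In script terms, all of this is driven by a single stack property. A minimal sketch (the mode names are those documented for recent engine versions; "showAll" is just one of several choices):

on preOpenStack
   -- scale the stack to use as much of the screen as possible while keeping all content visible
   set the fullscreenMode of this stack to "showAll"
end preOpenStack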


Video or Written Documents?

by Heather Laine on May 1, 2014 24 comments

How is it best to deliver help and instruction? I started musing on this question when I got feedback from a customer on just how much he hated video tutorials as a medium to learn programming. Really, I thought? How common is that? His argument was that it’s much faster to read text, pull out the relevant bits and absorb the information you are looking for, and sitting through a video takes longer. I wonder how true this is and in what context. 

Of course, working in support, I’m acutely aware that most people would rather put their hand in a fire than read the instructions. In a perfect world, software would just leap out of the screen, grab you round the throat and yell "Don’t do that to me! Do it this way instead!" Or, putting it another way, the interface would be so self-explanatory that no-one would need any kind of instructional materials at all. For a complex programming environment like LiveCode, that’s a pretty tall order. You are just going to have to learn the language, sometime, somehow. Read the dictionary, work through some lessons, join one of our summer courses or get some Academy tutorials. But which? Some people prefer to read a book in the bath (our dictionary will probably keep you clean for a year), others might prefer to follow along to a video. The academies are a nice mix: you get both. Every video has an accompanying written document you can copy and paste from. I’d love to be a fly on the wall to see how people actually use them. Do you watch the video? Do you just read the document? How much does the video contribute to your understanding of the document and vice versa?

I’m the kind of person who likes to learn specific things, when I need them. I am unlikely to sit down and read a book on Dreamweaver from cover to cover, but if I need to know how to do a specific thing like create a rotating gif, I’ll go and look it up. Usually not in the Dreamweaver help, which sucks, but by googling it and grabbing a nice text tutorial. Yes, I realized in thinking about this that I would not look for a video. Videos annoy me. They mean I have to turn my music off to listen to them, and you can’t copy and paste from a video. I am the Queen of Copy and Paste!

My daughter, on the other hand, will never read anything if there is a video alternative. Reading seems to be something that does not come naturally to the younger generation. 

Relevant to these musings also is the question: is it better to have short, self-contained tutorials on specific things, or a longer, more themed book, tutorial series or video course, going through a subject in depth? Our lessons are an example of the former; they are based around the theory of answering one specific question – how do I use Google Maps in LiveCode? How do I connect to an SQLite database? For me, this is a good way to learn. I’m not overwhelmed by a tome on the theory of storing and accessing data and including it in an app, I just do what I need to do, today. Over time, as I do more and more of these specific tasks, things fall into place and I reach that "aha" moment where it all starts to make sense and I can flexibly create new items from what I have learned and understood. But other people might be happier with a soup-to-nuts guide before they start on their own first app.

So where do you guys fit? What comes naturally to a programmer? How and when did you reach your "aha" moment?

This is Lily’s take on this complex issue, after considerable thought:

[image: Lily’s take]


LiveCode 7 – ‘put 0 into slowdown’

by Sébastien Nouat on April 30, 2014 9 comments

Since we finished the refactoring of the engine, with every feature working as it should, performance issues have been raised by several community members about different parts of the new engine. I have been working closely with Ali on this aspect, and here is my take on it, with fewer eggs but a nice musical clef.

We were of course aware of this slowdown, which was mainly caused by the fact that the engine was working in a uniform way regardless of the kind of characters in the strings in use. This treatment of strings was the first part of the global refactoring plan, intended to let us make modifications affecting the whole engine instead of the micro-changes targeting one area that were made in former versions; the pitfalls behind any single modification were obviously numerous.

Chiefly, the main slowdown comes with the handling of Unicode: there is much, much more work to be done any time a string operation is executed. As Fraser explained in his blog post, Unicode is not simply the ability to use characters outside the ISO-8859-1 encoding. It also comes with all the subtlety of characters that occupy more than one storage unit, be it:

– combining chars, which add as many accents as needed to the same letter (OS X hands you this type of character easily, since it uses the NFD form);

– surrogate pairs – who uses music? The treble clef is one of these characters, stored as a surrogate pair.

As one may guess, it becomes slightly more difficult to compare two strings when they can be equal despite being stored with a different number of code units – even worse, finding a substring within a string is an operation that must start from the beginning, since no character indexing is valid when combining chars or surrogate pairs are present. And since LiveCode users love to use ‘item’, ‘word’, ‘line’ or any of the new chunks introduced in 7.0, it would be best to bring script execution back to its pre-Unicode timing where possible.
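As a quick sketch of the situation in 7.0 syntax (assuming numToCodepoint and the new codeunit chunk behave as described in the release notes):

put numToCodepoint(233) into tComposed -- U+00E9: "é" as one precomposed codepoint
put "e" & numToCodepoint(769) into tDecomposed -- "e" plus U+0301, a combining acute accent
put (tComposed is tDecomposed) & return -- true: with formSensitive false (the default) the two compare equal
put (the number of codeunits in tComposed) & comma & (the number of codeunits in tDecomposed) -- 1,2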

The main goal has been to avoid using the CPU-costly Unicode functions as much as possible, which boils down to storing more state for a string – is it native, is it combined, does it include surrogate pairs? This makes the string operations work differently according to their content, and discards any slowdown which could be caused by Unicode’s intricate rules. In the end, some operations even became faster than they were before – ‘put … before’, for instance, shows this.

In the same way, the engine is now clever enough to keep in mind when a string has been converted to a number. Since a LiveCode variable only stores strings, that comes in quite handy when the variable is used in a loop, and could probably be tagged as a speedup!
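The classic case is a counter in a tight loop – a trivial sketch of the kind of script that benefits from this caching:

put 0 into tTotal
repeat with i = 1 to 1000000
   -- tTotal can now stay a number internally, instead of being re-parsed from a string on every iteration
   add 1 to tTotal
end repeat
put tTotal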

Following on from the examples submitted as bug reports against the slowdown, here is a comparison between 6.6.1 and the (future) DP-3:

 

[chart: script timing comparison between LiveCode 6.6.1 and 7.0 DP-3]

 

That should have you enjoying DP-3!


Expediting Execution, or Eggs-pediting Eggs-ecution

by Ali Lloyd on April 29, 2014 3 comments

Almost all of the drive towards getting out the first DP of LiveCode 7 consisted of squashing bugs and improving general stability. In other words, we were focused almost entirely on functionality. Now that the list of bugs has dropped to a manageable level, we have turned our attention to improving speed (don’t worry though, we haven’t forgotten about your bugs!).

If you’ve read Fraser’s blog post about the storage of Unicode strings, you’ll know that certain on-screen characters take up more space in memory than others. Most of the awkwardness in manipulating and processing strings comes from having to deal with this possibility. This means that where previously accessing a character of a string was just a constant-time array access operation, it now requires mapping code unit indices to character indices. “You can’t make a Unicode omelette without having to crack a few performance eggs,” if you will.

So one way we’ve improved performance is by keeping track of when the strings have such anomalies. This is what I’m thinking of as the ‘count your chickens before they are hatched’ optimisation. Each code unit is an egg, and the characters are the resulting chicks. If I’ve promised to give you my 20,000th chick, I can simply give you the 20,000th egg, knowing that there is a one-to-one correspondence between them. Otherwise, I would have to go through them one by one inspecting each egg to see if it has hatched before being able to declare a certain chick as the 20,000th. 

[image caption: “You did say number 20,000, right?”]
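In LiveCode terms, the ‘chick’ being asked for is just something like this (tText being a hypothetical variable):

put char 20000 of tText into tChick
-- if the engine knows tText contains no combining characters or surrogate pairs,
-- this is a direct index into the code unit array; otherwise it has to walk the string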

Another change that has been made to speed up the engine is what might be called the ‘don’t put all your eggs in one basket’ optimisation. The basket, in this case, is the array of 16-bit code units which is used to store the Unicode string. Now we have two baskets, the second being an array of 8-bit code units. We have all our large eggs in one basket, and small in another. We’re not so much concerned about dropping the basket and losing the eggs, but more about being able to use shortcuts – sorting them out in this way should make it much easier for the eggs to be examined and put in appropriate boxes, for example. Similarly there are many more efficient methods we can use for string processing with arrays of 8-bit code units.


Speaking of boxes, another major slowdown in LiveCode 7 was related to the storage of LiveCode variables. All LiveCode variables were being stored using an internal concept of ‘value’, and never as native C types. In particular, this meant that numeric variables were stored as MCNumberRef, essentially just a wrapper round a native type. Adding to a number involved the unboxing from and re-boxing into an MCNumberRef, a relatively expensive operation. The change to allowing certain native types is roughly the equivalent of having a built-in egg holder in your fridge versus having the actual box in there.

If you thought that analogy was a stretch, then you perhaps ought to stop here. I’ll be honest, I thought there were more chicken and/or egg phrases that I could adapt for my purposes.

Benjamin Franklin once said “An egg today is better than a hen tomorrow.” How remarkably prescient of him, about 250 years before the fact, to envisage how we would be approaching the task of improving our Unicode string comparison operations. For he must surely have been referring to the following example:

put "a" before tLongUnicodeString
if tLongUnicodeString begins with "a" then
            // do something
end if

Unfortunately, checking in general whether tLongUnicodeString begins with another string is distinctly non-trivial if the formSensitive property is false. This is because there is no way of knowing in advance what displayable characters it consists of, and how many code units might make up those characters; it may be that the whole string is only one displayed character, if it has an absurd number of combining characters. So even if the first displayed character of the string is in fact a single code unit, we would have to normalize the entirety of tLongUnicodeString simply in order to perform the check. To use Franklin’s rather bizarrely chosen fowl metaphor, the normalized form of tLongUnicodeString is the ‘hen’, which takes a (metaphorical) day to normalize, whereas the first displayed character of it is the ‘egg’. If we could normalize on the fly, then we would only have to wait for the ‘hen’ if the strings were in fact equal. If not, we could use the ‘egg’ today to return a result very quickly.

These are just some of the optimisations we have implemented for LiveCode 7, so we think you’ll notice significant speed improvements in DP 3.



What else can you do with LiveCode?

by Hanson Schmidt-Cornelius on April 28, 2014 1 comment

In my last blog I discussed how much fun it can be for kids to work with LiveCode, but have you ever thought about how much fun it is for adults too? And I am not talking about writing code. LiveCode is so versatile that you can also do other things with it. In particular, I have come across two other interesting applications for which people use LiveCode:

1. An Image Editor/Drawing Tool

Yes, I have found that LiveCode is being used as a drawing and image editing tool, and you can literally draw to your heart’s content.

[screenshot: LiveCode used as a drawing tool]

You can drag and drop controls onto a card and update a range of graphics and blending effects through the property inspector of the particular objects.

You can import images by selecting “File > Import As Control > Image File…” and then use the controls in the bottom half of the tools palette to update and alter the image.
Now you may think the magnifying glass is missing from the tools, but don’t despair: LiveCode has one of these handy little gizmos too. To zoom into a region of the image you want to edit, select a drawing tool and press Cmd+mouse button on Mac or Ctrl+mouse button on Windows over that area, and a new window opens in which you can update the enlarged pixel area.

Once you have finished updating your image, you can save it with a command like:

export snapshot from image "YOUR IMAGE NAME" to file "MY-IMAGE.png" as PNG

or something similar if you want to export a capture of some other control or even the entire card.
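For instance, a capture of the whole current card could be taken with something along these lines (a sketch of the same command, with an illustrative file name):

export snapshot from this card to file "MY-CARD.png" as PNG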

2. A Presentation Editor

Building on the use of LiveCode as an Editor/Drawing Tool, a logical extension is the creation of presentations.

[screenshot: LiveCode as a presentation tool]

Admittedly this does require a bit of coding in order to navigate between cards, but this is relatively trivial, and of course LiveCode has a large range of visual effects that you can use when transitioning between slides.
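For example, a ‘next slide’ button script can be as small as this (the dissolve effect is just one of the many available):

on mouseUp
   visual effect dissolve
   go to the next card
end mouseUp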

The nice thing about slides written in LiveCode is that you are not bound to the particular device you created the presentation on. You can transfer the presentation to other hardware architectures and operating systems.

Any Other Ideas?

 If you can think of anything else that LiveCode could be used for, other than programming of course, then I would love to hear from you.


The Hacked Server Hangover

by Arnaud Bouchot on April 24, 2014 4 comments

Every server admin managing a shared hosting environment knows what the hacked server hangover is like:
– it is painful
– it is frustrating, because it happens despite the strict security you already put in place: an antivirus, strict firewall rules, upgraded services…
– it almost always starts with the same boring story: “once upon a time, one of your users installed WordPress and forgot about it”, leaving it in a corner of their space and letting it fall several versions behind the current release, i.e. full of security holes.

I could talk for much longer about these CMSes, and not only WordPress, but basically the security holes more often originate from third-party templates, plugins or custom developments than from WordPress or the other CMSes themselves. These old, unmaintained installations make the ideal ground for all the exploit scanners that intruders/hackers use to get access to your servers. Most of the time, malware is injected after a series of XSS (cross-site scripting) attacks or SQL injection exploits in the user’s folder.

The pain propagates as far as it can, sucking out all the resources it can take from your server; in some extreme cases you could even lose access to your own server. Generally, the first symptoms come with an excess of emails in the queue: thousaaaaaaands of garbage emails delivering the best offers to the whole world from your shared server IP. Disaster. In this case, where the server is used as a spamming resource, you will also find that the infected user’s websites have been defaced and are now used to relay the spam and host temporary phishing pages that try to steal people’s sensitive data (generally banking details).

This also has a few exciting side effects, like exposing your server to high CPU load, leading to such poor performance that you can barely SSH in any more.

These are bad times and the fun has only just started, but action has to be taken as quickly as possible because you are in a very bad spiral of events; the next steps are your IP being reported to DNSBLs (DNS blacklists) -> server blacklisted -> IP reputation downgraded -> breakdown.

All you need is a hug, a fast, efficient painkiller and a good way of preventing it from ever happening again.

You can ask around for the hug, if you are not spending too much time on your servers I’m sure you’ll find a friend – easy.

I can provide painkillers:

Linux Malware Detect (LMD) is a malware scanner that detects and removes malware on your Linux server. It says it all. It is particularly great for web servers in shared hosting environments, and this is LMD’s main focus.

The threat landscape in shared hosting environments is different from the scenario typically addressed by standard antivirus products. These primarily detect OS-level trojans, rootkits and traditional file-infecting viruses, but are quite poor at detecting the increasing variety of malware at the user account level, which serves as an attack platform.

LMD uses MD5 file hashes and HEX pattern matches as signatures. These signatures are built from malware data delivered by IPSes (intrusion prevention systems), community data, ClamAV and user submissions.

I have implemented LMD on every Linux web server that I install, even my own laptop, so let’s take it on board.

Please note that you need a Linux server, root access and a bit of paranoia. Also, in case you are wondering, LMD is already watching all our On-Rev servers.

1) Download the latest LMD package.

# cd /tmp
# wget http://www.rfxn.com/downloads/maldetect-current.tar.gz

2) Install it

# tar xfz maldetect-current.tar.gz
# cd maldetect-*
# ./install.sh

3) Configure it

(every param in the config file is commented)

# nano /usr/local/maldetect/conf.maldet

Here's a sample file:

##
# [ EMAIL ALERTS ]
##
# The default email alert toggle
# [0 = disabled, 1 = enabled]
email_alert=1

# The subject line for email alerts
email_subj="maldet alert from $(hostname)"

# The destination addresses for email alerts
# [ values are comma (,) spaced ]
email_addr="you@domain.com"

# Ignore e-mail alerts for reports in which all hits have been cleaned.
# This is ideal on very busy servers where cleaned hits can drown out
# other more actionable reports.
email_ignore_clean=0

##
# [ QUARANTINE OPTIONS ]
##
# The default quarantine action for malware hits
# [0 = alert only, 1 = move to quarantine & alert]
quar_hits=1

# Try to clean string based malware injections
# [NOTE: quar_hits=1 required]
# [0 = disabled, 1 = clean]
quar_clean=1

# The default suspend action for users with hits
# Cpanel suspend or set shell /bin/false on non-Cpanel
# [NOTE: quar_hits=1 required]
# [0 = disabled, 1 = suspend account]
quar_susp=0
# minimum userid that can be suspended
quar_susp_minuid=500

4) Manual Scans and Usage

You may just want to scan the “suspect” user’s home directory, in which case this is the command you’d want to run:

# maldet --scan-all /home/suspect/

The following command is good to know too: if you previously ran a scan but forgot to turn on the quarantine option, you can still quarantine or clean all the results of a previous malware scan using its SCANID:

# maldet --quarantine SCANID

OR

# maldet --clean SCANID
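If you’ve lost track of the SCANID, recent LMD versions can list previous scan reports (check your version’s --help output, as the exact flags vary):

# maldet --report list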

5) Daily Scans

By default the LMD cron script is located at /etc/cron.daily/maldet; it performs a daily scan, signature update and quarantine, and sends a daily malware scan report to the email addresses you specified. The Apache folders are already listed, but if you need additional paths to be scanned then you will need to add them here:

# nano /etc/cron.daily/maldet

Voilà: after having cured a few servers and saved a lot of time using LMD, I felt I had to tell you about it, hoping it will be as useful to some of you as it has been to me.

Finally, I would like to add a little something about WordPress. I like WordPress, and the only reason I have mentioned it in our catastrophe scenario above is that it is by far the most popular CMS, so there is a higher probability of finding out it has been the gateway for some hack, simply because of its popularity. This does not mean that WordPress is less secure than other CMSes like Joomla or Drupal. As long as you keep your WordPress up to date and respect a few basic rules you’re OK; I may detail these rules in a future blog post.


From Cocoa with Love

by Mark Waddingham on April 23, 2014 11 comments

There’s been a lot going on at RunRev lately development-wise. Maintenance on the 6.6 cycle has been trundling along, 7.0 with its Unicode support is rapidly maturing, and in between these two things sits the project that has been my main focus for the last few months – 6.7.

The main goal of 6.7 is to rework the Mac-specific parts of the engine to use the Cocoa framework rather than the (now deprecated, and aging!) Carbon / Classic frameworks. Not only will this allow LiveCode apps to be sandboxed (there are numerous bugs in the Mac implementation of sandboxing for Carbon apps) and thus submitted to the Mac App Store once again, but it also means that things like revBrowser work much better, as the browser control can be embedded directly in the stack window.

One of the most challenging parts of porting the engine to use Cocoa has been (as far as possible!) retaining identical functionality to before. Cocoa is a very high-level framework and as such likes to do things in a very specific way. Given that LiveCode is also a very high-level framework, there is a certain amount of ‘creative coding’ required to bend Cocoa around to LiveCode’s way of thinking.

For example, Cocoa will not send continuous ‘windowMoved’ messages (unlike Carbon); you only get periodic updates when the user pauses the movement. This did cause a bit of consternation, but after some googling, some hair pulling and a nifty use of an auxiliary thread and window server interrogation, LiveCode can still enjoy appropriate moveStack messages (you can see the patch for that here: https://github.com/runrev/livecode/pull/613).

All in all though, the transition to Cocoa has been relatively smooth – a cleaner, better-defined separation between the platform-specific part of the engine and the rest is emerging, and we’re approaching parity with the non-Cocoa feature set.

Anyway, I’d better get back to my 6.7 bug fix list – we’re hoping to get 6.7-dp-3 ready soon, which will hopefully be the last DP before we start the RC cycle!
