Eating Our Own Haggis

by Mark Waddingham on May 29, 2014

My turn to blog has come around again and I thought today that I’d spend some time talking about a project I’ve been working on (quietly, in the background) for a while now.

As it stands LiveCode is a slightly different beast from most other programming languages and environments in that the LiveCode Language is tightly coupled to what you might call the LiveCode Framework. This LiveCode Framework is the functionality and object model you have access to when coding in the LiveCode Language – things such as stacks, controls, the message path and such. This integration is very tight (monolithic, one might say) and is quite different from more traditional environments such as Java and C.

The Java (and C) model is more along the lines of – you have a compiler, something capable of running the output of the compiler (virtual machine or the processor itself) and a very simple / low-level collection of functionality (control structures and basic types) which constitutes the language. In this model everything else is implemented in the same language (for the most part anyway) and provided to application programmers through packages (or libraries). The neat thing about this model is that you don’t have to leave the language to ‘extend’ the language, or indeed implement the majority of functionality that you expect from the environments. Perhaps crucially, if you can code in that language, then you can (essentially) extend the language (albeit through the generic function call / object style syntax we all know but perhaps are not great fans of – there must be a reason you like LiveCode, right?).

Indeed, the state of affairs with LiveCode (as it stands) very much creates a rift – there is the engine, implemented in C++, where the majority of the functionality we depend on lives, and then there is the LiveCode Language which sits on top. If you can program in C++ then great, you can write externals or even submit patches to the engine itself; but if you do not program in C++ (or, indeed, do not have the time or inclination to learn to) you are, to a certain extent, limited to what the engine gives you.

Wouldn’t it be great if you could for all intents and purposes extend the engine without ever leaving the LiveCode language?

I certainly think so and thus we come to the project I want to talk about. The ultimate goal of this project is nothing less than to reduce the C++ footprint of the engine to a core script compiler, virtual machine and basic type system with everything else being implemented in LiveCode itself (and who knows, perhaps it might even be reduced further over time – LiveCode implemented completely in LiveCode itself?). Of course, it is going to take quite a while to reach this goal, but the seeds are there and have been growing for a while – thus far principally through the refactoring work our team has been undertaking as part of LiveCode 7.

So, what does this actually mean for LiveCode developers as a whole?

Well, there will be a new dialect of LiveCode in which you will be able to write things called extensions. An extension will (in the first instance) either be a library or a widget.

Libraries are collections of commands and functions that integrate into the engine in exactly the same way as engine commands and functions do (the only caveat, initially, being that you will be restricted to generic function / command call syntax to access them – full and proper syntax bindings will have to wait until Open Language is born).

Widgets are collections of commands, functions and event handlers – these will be used by a new control type, widget, and will allow you to create your own controls which look, feel and act as if they were in fact part of the engine itself. Indeed, widgets will be very familiar to anyone who has ever written a control in Visual Basic or Delphi, or derived a new control from the base control class in any of the multiplicity of C++ GUI frameworks. This idea of controls is different from the aggregate style of custom control we currently see in LiveCode – rather than using a container into which you put other controls, you instead get a ‘paint’ event and all the basic interaction events you need to craft your control entirely the way you want. To help you do this, you’ll also have access to an array of functionality crafted for the task – most importantly, a collection of syntax that provides 2d vector drawing capabilities along the lines of the HTML5 Canvas or the CoreGraphics library, with which to draw your widgets.
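To give a flavour of what I mean (with the same health warning as the aspell example further down – none of this is settled syntax, and the widget name, property, ‘paint’ handler and drawing commands here are all illustrative assumptions of mine), a minimal widget might read something like this:

widget com.example.swatch
-- speculative sketch only: the names and drawing syntax are illustrative, not final

-- a persistent property, settable from object scripts just like a built-in control's
property swatchColor

on paint
   -- fill the whole bounds of the widget with the chosen colour
   set the fill paint to swatchColor
   fill rectangle my bounds
end paint

on mouseUp
   -- notify the host stack, just as a built-in control would
   post "swatchClicked"
end mouseUp

end widget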

I mentioned above that this will be based upon a ‘new dialect’ of LiveCode (I’m currently calling it LiveCodeish) but this shouldn’t set any alarm bells ringing (hopefully, at least). Essentially the dialect will be a distilled form of the current LiveCode core syntax and semantics – the syntax will be cleaned up and its functionality will be much better defined (for example, getting rid of the auto-conversion of arrays to empty strings and ‘fixing’ the non-standard ‘for x to y’ loop we have). The main aim of this clean-up is to ensure we have a solid, predictable, reliable and completely defined base to work from, without having to worry about any hangovers from the (very much organically evolved) past.

[ I think it is important to mention that we do not intend there to be two LiveCode languages – the language you use for extensions (what I’m calling LiveCodeish) and the language you use in object scripts. It is our intention that there will be one, and only one, LiveCode – the object script language will also move to LiveCodeish once we’re happy we’ve got LiveCodeish right. (And don’t worry: when LiveCodeish does arrive at the LiveCode script level, your existing scripts will all continue to run as they do now, and you’ll be able to translate them automatically over time, as needed, with the script translation tool we are planning as part of the syntax cleanup project that will be enabled by Open Language.) ]

There are perhaps three aspects of this proposed new LiveCodeish language which deserve mention.

The first is that this new dialect will provide the ability to hook directly into native code by binding to functions written in Java, Objective-C or C/C++, thus, hopefully, eliminating the need to write ‘glue-code’ externals and, again, making extending LiveCode much more accessible to anyone who can code in LiveCode, rather than to the few who can – or have the time to – delve into these lower-level environments.

The next is that it will (eventually) be typed in a very natural way. You will be able to declare variables and handler parameters as having a given type (by default their type will be any). Whenever you try to assign a value to the variable, the engine will attempt to convert it to the appropriate type and only throw an error if this is not possible. For example, a variable with an integer type can have an integer put into it directly, a real number can be put into it after it’s been automatically rounded, and even a string can – as long as the string can be converted to an integer. This simple ability will make it easier to write ‘correct’ code – you’ll find it much easier to debug problems where type mismatches are causing errors. It also means that (in the future) suitably typed code will run more efficiently – for example, an integer variable can be represented internally far more efficiently than a variable which can hold any type.
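As a purely speculative illustration (the ‘as integer’ form is my own shorthand here, not finalized syntax), typed declarations might read something like this:

local tCount as integer    -- speculative syntax: declare the variable's type
put 42 into tCount         -- an integer goes in directly
put "12" into tCount       -- a string that converts cleanly is fine too
put 3.7 into tCount        -- a real is automatically rounded, giving 4
put "banana" into tCount   -- cannot be converted, so this would throw an error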

The third thing about LiveCodeish which is quite different from normal LiveCode scripts is that there is no message path – well, the message path is there, it’s just that commands and functions in extensions do not sit in it. This is exactly the same situation as current engine commands and functions – they are bound at compile time. Not sitting in the message path does not mean that extensions have no access to it – quite the reverse: they will have access in the same way engine syntax does, and you will be able to send messages to objects through the message path just as the engine can. This slight change in perspective brings LiveCodeish much closer to more ‘traditional’ languages – and means that over time a much larger range of optimization potential becomes available (as things are more static in this world – dynamicity is great, except when you want to optimize at compile time!). Indeed, I can see no reason that LiveCodeish should not eventually be compilable to native code, with performance at a level where the decision to code something in C or LiveCodeish on that metric alone is a non-issue – LiveCodeish will be amply able.

So, there have been a lot of words here so far about what will be, but not many examples. We’re not quite at a point yet where we have something to show along these lines (although it is my principal project right now), but what I can do is give you a theoretical example of what a piece of (useful!) (untyped) LiveCodeish might look like. Here follows a potential future LiveCodeish script for adding support for the ‘aspell’ library (note that this code might never actually run as written – it is still somewhat thought-experiment syntax!):

library com.runrev.aspell

external function new_aspell_config() is pointer from "aspell"
external command aspell_config_replace(pointer, cstring, cstring) from "aspell"
external function new_aspell_speller(pointer) is pointer from "aspell"
external command delete_aspell_config(pointer) from "aspell"
external function aspell_error(pointer) is integer from "aspell"
external function aspell_error_message(pointer) is cstring from "aspell"
external function aspell_speller_error_message(pointer) is cstring from "aspell"
external command delete_aspell_can_have_error(pointer) from "aspell"
external command delete_aspell_speller(pointer) from "aspell"
external function to_aspell_speller(pointer) is pointer from "aspell"
external function aspell_speller_config(pointer) is pointer from "aspell"
external function aspell_config_retrieve(pointer, cstring) is cstring from "aspell"
external function aspell_speller_check(pointer, cstring, integer) is integer from "aspell"
external function aspell_speller_suggests(pointer, cstring, integer) is pointer from "aspell"
external function aspell_word_list_elements(pointer) is pointer from "aspell"
external function aspell_string_enumeration_next(pointer) is cstring from "aspell"
external command delete_aspell_string_enumeration(pointer) from "aspell"
local sSpeller

command spell_set_dictionary_language pLanguage
   local tConfig
   put new_aspell_config() into tConfig
   aspell_config_replace tConfig, "lang", pLanguage
   aspell_config_replace tConfig, "encoding", "utf-8"

   local tNewSpellerOrErr
   put new_aspell_speller(tConfig) into tNewSpellerOrErr
   delete_aspell_config tConfig

   if aspell_error(tNewSpellerOrErr) is not 0 then
      local tErrorMsg
      put aspell_error_message(tNewSpellerOrErr) into tErrorMsg
      delete_aspell_can_have_error tNewSpellerOrErr
      throw tErrorMsg
   end if

   if sSpeller is not 0 then
      delete_aspell_speller sSpeller
   end if

   put to_aspell_speller(tNewSpellerOrErr) into sSpeller
end spell_set_dictionary_language

function spell_get_dictionary_language
   return aspell_config_retrieve(aspell_speller_config(sSpeller), "lang")
end spell_get_dictionary_language

function spell_check_word pWord
   local tResult
   put aspell_speller_check(sSpeller, pWord, the number of characters in pWord) into tResult
   if tResult is -1 then
      throw aspell_speller_error_message(sSpeller)
   end if
   return tResult is 1
end spell_check_word

function spell_check_not_word pWord
   return not spell_check_word(pWord)
end spell_check_not_word

function spell_suggest_spellings pWord
   local tResult
   local tSuggestions
   put aspell_speller_suggests(sSpeller, pWord, the number of characters in pWord) into tSuggestions
   if tSuggestions is 0 then
      throw aspell_speller_error_message(sSpeller)
   end if

   local tElements
   put aspell_word_list_elements(tSuggestions) into tElements
   try
      repeat forever
         local tSuggestedWord
         put aspell_string_enumeration_next(tElements) into tSuggestedWord
         if tSuggestedWord is empty then
            exit repeat
         end if
         if tResult is not empty then
            put return after tResult
         end if
         put tSuggestedWord after tResult
      end repeat
   finally
      delete_aspell_string_enumeration tElements
   end try
   return tResult
end spell_suggest_spellings

Here you see a collection of functions and commands which, in the current world, would have to be implemented in C and loaded as an external, but in the new world you won’t need to touch C at all.

Now, the above aspell library is just commands and functions, so from LiveCode (object) scripts you’d call them with expressions such as spell_suggest_spellings() – this isn’t very LiveCode-like and isn’t really what we want. However, this is where the Open Language project comes in, and I’ll talk about how that project impacts LiveCodeish in another post.
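Just to make that concrete, a button script might drive the library through that generic call syntax roughly like this (a sketch only – the field name and dictionary code are my own inventions):

on mouseUp
   local tWord
   put word 1 of field "Word" into tWord
   spell_set_dictionary_language "en_US"
   if spell_check_word(tWord) then
      answer tWord && "looks fine."
   else
      answer "Did you mean:" & return & spell_suggest_spellings(tWord)
   end if
end mouseUp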

There’s one final thought I’d like to finish with. If you look at the above example script, it is clear that it is pretty much the same as the object scripts we write every day, except for the ‘external’ handler declarations at the top; thus a valid question would be: ‘Why not just develop LiveCodeish as an updated form of object scripts, so we can put this stuff in a library stack or backscript?’. This is certainly a fair question if you are thinking within the current LiveCode Framework, but that’s missing the point of LiveCodeish. LiveCodeish aims to be something much more general than a language which can only be used in a LiveCode object script (which requires the notion of a LiveCode object to exist to begin with!); it is a distilled form of current LiveCode syntax and semantics which has no dependence on any framework, nor any structure imposed upon it apart from what is necessary to be a (minimal) programming language. If all goes well then it will be something that can be used to write the LiveCode Framework itself from the ground up – we will get to eat our own haggis in totality.

From Cocoa with Love

by Mark Waddingham on April 23, 2014

There’s been a lot going on at RunRev lately, development-wise. Maintenance on the 6.6 cycle has been trundling along, 7.0 with its Unicode support is rapidly maturing, and in between these two things sits the project that has been my main focus for the last few months – 6.7.

The main goal of 6.7 is to rework the Mac-specific parts of the engine to use the Cocoa framework rather than the (now deprecated, and aging!) Carbon / Classic frameworks. Not only will this allow LiveCode apps to be sandboxed (there are numerous bugs in the Mac’s implementation of sandboxing for Carbon apps) and thus submitted to the Mac App Store once again, but it also means that things like revBrowser work much better, as the browser control can be embedded directly in the stack window.

One of the most challenging parts of porting the engine to use Cocoa has been (as far as possible!) to retain identical functionality to before. Cocoa is a very high-level framework and, as such, likes to do things in a very specific way. Given that LiveCode is also a very high-level framework, a certain amount of ‘creative coding’ is required to bend Cocoa around to LiveCode’s way of thinking.

For example, Cocoa will not send continuous ‘windowMoved’ messages (unlike Carbon); you only get periodic updates when the user pauses the movement. This did cause a bit of consternation, but after some googling, some hair pulling and a nifty use of an auxiliary thread and window server interrogation, LiveCode can still enjoy appropriate moveStack messages (you can see the patch for that here: https://github.com/runrev/livecode/pull/613).

All in all, though, the transition to Cocoa has been relatively smooth – a cleaner, better-defined separation between the platform-specific part of the engine and the rest is emerging, and we’re approaching parity with the non-Cocoa feature set.

Anyway, I’d better get back to my 6.7 bug-fix list – we’re hoping to get 6.7-dp-3 ready soon, which will hopefully be the last before we start the RC cycle!

Hi-speed HiDPI

by Mark Waddingham on March 18, 2014

With 6.6 almost out of the door (we released 6.6 RC 2 today, for those who have yet to check it out) we’ve finally added support for the new Retina displays on the Mac desktop. This support means that LiveCode apps will look as crisp as other Mac apps on modern Mac laptops with virtually no work at all (you might need to add some @2x images, but that’s about it). However, this crispness does come at a cost…

LiveCode’s graphics rendering is inherently CPU-bound (for the moment at least). This means that every pixel you see in a window is being generated by operations performed on the machine’s CPU, with the GPU only being involved at the window server end when it’s compositing the display. Whilst this approach means that we can deliver a much richer set of graphics primitives and effects than the host OS can manage, it does mean that Retina has a huge cost – a four-fold cost, in fact (Retina displays are double the density in both directions, so for every single pixel rendered on a non-Retina display, four pixels are rendered on a Retina display).

Now, whilst CPU speed hasn’t suddenly jumped four-fold in the time it took pixel counts to do so, one thing has jumped considerably – the number of cores our CPUs have. Indeed, all Retina MacBooks have at least two cores, with each core able to run two threads concurrently. Within this fact lies a potential short-term solution to speeding up display on Retina systems…

The solution I’ve been experimenting with is to split the rendering of a stack up into 4 tiles, with a separate thread responsible for rendering each tile. The engine keeps a pool of four threads around on which it schedules this work – as soon as one thread is complete, the main thread takes its work and sends it to the appropriate quadrant of the window buffer (ideally the individual threads would do this last part as well, to eliminate the bottleneck it creates; however, with the APIs the engine currently uses for window updates this isn’t possible).

In an ideal world you’d hope to get a 4x speed-up in rendering, but due to the overhead inherent in the approach this is purely a theoretical limit. That being said, my current experiments indicate that around a two-fold increase in rendering speed should be attainable for large and reasonably graphically complex stacks (lots of bitmap effects and gradients), which should certainly help to mitigate the jump from normal to Retina resolution.

Of course, this approach isn’t limited to Retina displays; the same idea can be applied to any computer with a multi-core CPU – there, too, you could see up to a two-fold increase in rendering speed, which is nothing to be sniffed at!

For C-source-level-interested parties, the code for the experiments I’ve been running can be found on the feature-threaded_rendering branch of my github repo (https://github.com/runrevmark/livecode.git). The code is Mac-only for now (search for MacStackTile in osxstack.cpp), and there are a few hacks here and there to ensure thread-safety (patterns are disabled and there’s a global lock around text rendering), but it’s certainly showing promise…
