All posts by richard

Promiscuous Ports, Power and unstable PPP

Well, the tracker system went out on the road earlier and it was an unmitigated disaster, mainly due to power issues. The Pi 2, relay board, controller board, GPS, IMU and GPRS board draw a fair bit of juice, more than the poxy USB charger can supply. I’m frankly not that surprised.

It highlighted that it doesn’t take much to start tripping things up power-wise. The system does have power monitoring, and I may start looking towards some more intelligent form of power management. On top of that, automotive power is dirty: it’s full of noise and spikes, and you can only guarantee it’ll be somewhere between 11V and 15V, which isn’t good for electronics. Add to that a 20-year-old dirty cigar lighter socket and a cheap Chinese PSU running at or above its limit, and all hell breaks loose. First up, then, we need to get things cleaned up. I have a sketch for a PSU on my desk, but that needs refining as it’ll form part of a new board to replace a goodly chunk of the wiring. For now I’ve gone over to a 2A LM2596 switching regulator module, and that’s helped with supply stiffness and cleanliness. Behind this are two nice reservoir caps, enough to give everything about a second of power to cover brown-outs. It’s possible I may need to look into batteries; however, the final unit should be wired to the battery directly, which should protect it from load-shedding devices on crank-over. Ideally I’d like the M25012 to hold the power to the Pi off until it knows all is well. The system now supplies the power for the monitor as well and has the ability to isolate it, which should help.

Next issue: the bloody TTYs keep moving. It seems that, unlike Windows, USB devices are always installed in the order they are found. Windows remembers where a device is and all is well: plug in two USB COM ports and Windows will splat them on COM3 and COM4, and every time those devices are in those USB ports they will henceforth ALWAYS get those IDs. Linux, or at least Raspbian, doesn’t. It seems to assign them on a first come, first served basis, which means it’s the luck of the draw whether the first device gets ttyUSB0 or ttyUSB1. Moving any USB device around before a reboot seems to muck this up, and that isn’t helpful.

The answer seems to be here: http://noctis.de/ramblings/linux/49-howto-fixed-name-for-a-udev-device.html, although note that udevinfo is actually a symlink where it exists and it won’t work on Raspbian; you want udevadm info instead 🙂 It’s also possible to use PID/VID and serial numbers to do this. Now, as the final unit will use either a hub and four devices OR a single 4-port adaptor, I’m anticipating further pain with this 🙁 However, for now the bridge and GPS adaptor use different VIDs and PIDs, so the following works:

Run tail -f /var/log/messages and plug each device in; make a note of the PID and VID of each (they are displayed).
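If nothing useful shows up in the log, lsusb will also list the vendor:product pair for everything currently plugged in (the output below is illustrative; the descriptions vary with your usb.ids database):

lsusb
# Bus 001 Device 004: ID 1a86:7523 QinHeng Electronics HL-340 USB-Serial adapter
# Bus 001 Device 005: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port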

sudo nano /etc/udev/rules.d/51-usbserial.rules and add:
KERNEL=="ttyUSB[0-9]*", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", SYMLINK+="trnet0", GROUP="tty", MODE="0600"
KERNEL=="ttyUSB[0-9]*", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="sgps", GROUP="tty", MODE="0600"

You can see where the PID and VID need to go. Save, exit and unplug everything. Once you plug the devices back in, the symlinks you chose (in my case trnet0 and sgps) will be created in /dev. These symlinks will always point to those PIDs and VIDs. With identical devices this won’t work; there are other IDs you can use, such as the serial number, although on these adaptors the serial needs to be programmed using the vendor’s utility.
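If you do end up with two identical adaptors that actually carry serial numbers, udevadm can show them and the rule can match on ATTRS{serial} instead. A sketch (the serial value here is made up):

udevadm info -a -n /dev/ttyUSB0 | grep -i serial
# then in the rules file, something like:
KERNEL=="ttyUSB[0-9]*", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", ATTRS{serial}=="A1B2C3D4", SYMLINK+="trnet0", GROUP="tty", MODE="0600"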

PPP works, but there are some issues, namely that right now we don’t know whether it’s up or down. I need to work out a way to deal with this and maybe some scripting to keep it up. I have now worked out how to get control of PPP and know what is going on. First, starting and stopping are easy. Make sure your application is running with the privileges needed to do this; adding the user to the ‘dip’ group should be enough. Then you can use http://wiki.freepascal.org/Executing_External_Programs#Run_detached_program to run pon and poff. To specify the profile use Process.Parameters.Add(‘<profilenamehere>’); and normally you’ll be wanting to run /usr/bin/pon or poff.

Next up, modify /etc/ppp/ip-up. Before the PPP_TTYNAME=… line, add:

echo $4,$5 > /tmp/pppup.sem
chown pi /tmp/pppup.sem

The chown assigns access to the pi user; you could give the rights to anyone. Now, once PPP comes up, pppup.sem will be dropped in /tmp, and thanks to the $4 and $5 the file will contain the local IP and the gateway IP, separated by a comma.

In ip-down, add the line rm /tmp/pppup.sem near the top; this will remove the file when the link goes down.

It may be worth removing this file at boot if it is present, as a crash or power failure may leave it behind.
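As a rough sketch of the ‘scripting to keep it up’ idea, something along these lines could be run once at boot and then from cron or a loop; the profile name ‘gprs’ is only a placeholder for whatever pppconfig created:

#!/bin/sh
# ppp-watchdog.sh - a sketch, not the finished thing.
# Clear a stale semaphore left behind by a crash or power failure.
if [ "$1" = "boot" ]; then
    rm -f /tmp/pppup.sem
fi
# If the semaphore is missing, assume the link is down and kick PPP.
if [ ! -f /tmp/pppup.sem ]; then
    /usr/bin/poff gprs >/dev/null 2>&1
    /usr/bin/pon gprs
fi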


Tracker Troubles

This one is more notes to myself, but someone may find them handy.

First up, getting the test suite onto this tracker/remote control system was not as clean-cut as it should have been. It turns out Lazarus sockets don’t do DNS, so you can’t just call a hostname; rather than bombing out with an error it just hard locks or occasionally segfaults. TWSocket can handle this, and as I want to get as close as possible I think there is going to need to be a wrapper around it. I also think a TPersistentSocket may be of use, where the socket has a FIFO and will attempt to reconnect on loss. Much thinking on this.

Next up, hours lost trying to work out why Traccar wouldn’t talk to the diag software when it was up. It turns out I discovered two things. Firstly, had I looked at the NMEA spec and paid attention, I’d have seen that the spec uses * as a reserved character for the checksum, and it isn’t supplied in a new field as I was doing. This was leading to a ‘,’ being appended to the ID, and Traccar (being helpful) silently bins it and the packet. Traccar is going to be going under the knife when this is all done. Secondly, nothing in the world either a) agrees on how to make a checksum or b) actually uses it, so it can be ignored; Traccar assumes it’s garbage and bins it. Save yourself some work when coding for the T55 protocol and don’t bother.
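For reference, the checksum the NMEA spec does describe is just the XOR of every character between the ‘$’ and the ‘*’, written as two hex digits. A quick bash sketch (the sentence used below is only an example):

nmea_checksum() {
    # XOR every character of the sentence body (everything after the leading '$')
    payload="${1#\$}"
    cs=0
    i=0
    while [ "$i" -lt "${#payload}" ]; do
        c="${payload:$i:1}"
        cs=$(( cs ^ $(printf '%d' "'$c") ))
        i=$((i + 1))
    done
    printf '*%02X\n' "$cs"
}
nmea_checksum '$PGID,1234'   # example sentence only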

Next up, PPP

Loads of tutorials on how to make this work and how to talk to the SIM900/908, and none of them seem to work. I had this working before I wiped the SD card, annoyingly, so back to work. I ended up using pppconfig in the end; wvdial doesn’t seem to like these modems and indeed won’t detect them. Most PPP configs almost work, failing with errors that no one has ever seen, SO FrankenPPP was born. I’ll pop the details up later because I know others are trying to do this. In short, run through pppconfig and then trash the resultant chat file; it’s broken, and something in there makes the SIM900 fall over with an odd error message. It’s likely a CRLF where it shouldn’t be, a missing ‘ or something I just couldn’t see. Roll your own or use the one I paste and all will be well.
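Until I get that posted, a bare-bones chat file along these lines is a reasonable starting point for a SIM900. This is a generic sketch, not the FrankenPPP file itself, and the APN is a placeholder you’ll need to change for your network:

# minimal GPRS chat file sketch (generic, not the one mentioned above)
ABORT "BUSY"
ABORT "NO CARRIER"
ABORT "ERROR"
"" ATZ
OK AT+CGDCONT=1,"IP","your.apn.here"
OK ATD*99***1#
CONNECT ""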

Next up, default gateways. There’s a touch of open source snobbery going on here. ‘Unable to Replace Default Gateway’ will be in your logs and nothing will work. Ask online and the standard response is ‘Yup, that’s right, BECAUSE YOU ARE AN IMBECILE AND HAVE A DEFAULT GATEWAY!’

That’s all the help you are going to get.

Not helpful. You see, MOST boards are going to be set up like this, and this unit will be chopped and changed from LAN to WAN frequently, so this needs fixing. I haven’t looked at the scripts yet, but there are up and down scripts. Firstly, I need to drop a semaphore anyway to give me the interface state, but this is also a good place to fix the gateway issue. O2 (whom I don’t recommend for this) use private IP addresses for their network (3 and EE don’t, for sure), so be aware of this. Do an ifconfig and see what gateway ppp0 is giving. For me it was 192.200.1.21.

Run a terminal and these two commands will fix it…

sudo route del default
sudo route add default gw 192.200.1.21 ppp0

This now means everything will go out of the PPP interface. That isn’t helpful when it all falls on its face, so when you are done, do it again to set things right for your own network (mine is 192.168.99.0/24):

sudo route del default
sudo route add default gw 192.168.99.1 eth0


Having been out and tested this, it seems this default gateway silliness is caused by dhcpcd; however, it’s entirely correct, as you’ll want to get on the net and play about while you are working, and thus you need this route here. However, if the board isn’t plugged in, dhcpcd doesn’t even try, there is no default route and PPP behaves as it should. It now makes me wonder if I even need to do this and shouldn’t just make a script for when I’m in dev mode.

These should go in the PPP up and down scripts, but I’ve not got there YET.
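When I do, the rough shape of it will probably be something like this, using the arguments ip-up and ip-down already receive ($1 is the interface, $5 the remote/gateway IP). A sketch only, and the eth0 gateway is just my LAN’s:

# in /etc/ppp/ip-up:
route del default
route add default gw "$5" "$1"

# in /etc/ppp/ip-down:
route del default
route add default gw 192.168.99.1 eth0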


Pi 3.5″ Touchscreen

These are just my notes on getting this up and running. I’d purchased a 3.2″ touchscreen to work with a Pi used on a project. However, they supplied a precompiled image rather than drivers and instructions, so these are my notes from taking it apart to work with my own image. First up, get a keyboard on it and open a terminal (this is horrid with so little screen real estate):

sudo apt-get install xrdp

This will get the RDP server onto it, then you can use Remote Desktop to get in there; much nicer and easier to use. The user is pi and the password raspberry (the defaults).

First, going after config.txt: it seems there is nothing in there. How annoying.

cmdline.txt does seem to have some extra bits:

fbcon=map:1 fbcon=font:PrFont6x11

This looks like it’s worth investigating. It seems to be passing info to the FBTFT driver; there’s some info here. This also identifies this as the BCM2708 SPI display controller, and if you look at the docs that line is given as needing to be added. So if we look through that driver info we should be able to find where the display is set up…

So we now need to look at /etc/modules. We might even find the touch driver in here too…

This file is VERY different to the stock file. The first, obvious changes are that SPI and I2C are enabled; I’d imagine the corresponding blacklist entries have been removed too. Then we have lines for fbtft_device, flexfb and ads7846_device, all of which are part of the above site. This is handy, as it means everything you need is likely here. In theory, as the modules have been merged into the mainline tree, you *should* have these already.

So the lines we need seem to be:

fbtft_device name=flexfb gpios=dc:22,reset:27 speed=48000000

flexfb width=320 height=240 buswidth=8 init=-1,0xCB,0x39,0x2C,0x00,0x34,0x02,-1,0xCF,0x00,0XC1,0X30,-1,0xE8,0x85,0x00,0x78,-1,0xEA,0x00,0x00,-1,0xED,0x64,0x03,0X12,0X81,-1,0xF7,0x20,-1,0xC0,0x23,-1,0xC1,0x10,-1,0xC5,0x3e,0x28,-1,0xC7,0x86,-1,0x36,0x28,-1,0x3A,0x55,-1,0xB1,0x00,0x18,-1,0xB6,0x08,0x82,0x27,-1,0xF2,0x00,-1,0x26,0x01,-1,0xE0,0x0F,0x31,0x2B,0x0C,0x0E,0x08,0x4E,0xF1,0x37,0x07,0x10,0x03,0x0E,0x09,0x00,-1,0XE1,0x00,0x0E,0x14,0x03,0x11,0x07,0x31,0xC1,0x48,0x08,0x0F,0x0C,0x31,0x36,0x0F,-1,0x11,-2,120,-1,0x29,-1,0x2c,-3

ads7846_device model=7846 cs=1 gpio_pendown=17  speed=1000000 keep_vref_on=1 swap_xy=0 pressure_max=255 x_plate_ohms=60 x_min=200 x_max=3900 y_min=200 y_max=3900

That’s quite a mouthful, so there is a copy of the working modules file. Check it against yours and copy and paste what’s needed rather than doing it all by hand. If you are going to edit this on a Windows machine, please use Notepad++ as it’ll save you a lot of pain.
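If you want to sanity-check the parameters before committing them to /etc/modules, the same lines can be fed to modprobe by hand (assuming your kernel already ships these modules; the ordering here just mirrors the modules file and is untested), then watch dmesg for a new /dev/fb1:

sudo modprobe fbtft_device name=flexfb gpios=dc:22,reset:27 speed=48000000
sudo modprobe flexfb width=320 height=240 buswidth=8 init=...   # full init string as above
sudo modprobe ads7846_device model=7846 cs=1 gpio_pendown=17 speed=1000000 keep_vref_on=1
dmesg | tail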

Interestingly, it looks like there are defined setup modes for other displays here, such as the Adafruit one, which makes adding things way easier. It may be worth looking into this, as it looks way easier than the above mess.

Next up, we need to check inittab…

Interestingly, they haven’t made any changes here. This should mean that the Pi’s main console is still intact. I don’t have anything on my desk to plug into it right now so I can’t verify this, but it does leave a way into the system if something breaks. There is a recommended line, but it does look like it would disable the main console. Now the question is: where is whatever is running on that screen coming from? The other place would be rc.local, and it’s not from there. The next step now is to make these changes on the dev system and see what happens.

Getting X running isn’t too hard now. You’ll need to move the fbturbo framebuffer config somewhere else; if you don’t, X will just bomb out with ‘no screens found’.

sudo mv /usr/share/X11/xorg.conf.d/99-fbturbo.conf ~

This will drop it into your home directory; you’ll want it back later to re-enable the normal framebuffer.
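Putting it back later (assuming it’s still sitting in your home directory) is just the reverse:

sudo mv ~/99-fbturbo.conf /usr/share/X11/xorg.conf.d/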

export FRAMEBUFFER=/dev/fb1
startx

This *should* now get you an X session. Have fun with that touchscreen, as it’s probably way out of whack 🙂 Ctrl+C and come out.

sudo mkdir /etc/X11/xorg.conf.d

My stock Raspbian didn’t have this directory; the updated one did, so you might not need to do this. In there, create 99-calibration.conf and put the following in it…

Section "InputClass"
Identifier      "calibration"
MatchProduct    "ADS7846 Touchscreen"
Option  "Calibration"   "215 3735 3938 370"
Option  "SwapAxes"      "1"
EndSection

Save and then try to start X again. With any luck you should have a working touchscreen. Note that while this is all pretty similar to the Adafruit unit, it’s different enough that if you use their walkthrough it won’t work.
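If those calibration numbers are miles off for your particular panel, xinput_calibrator can generate a replacement snippet in exactly this format. Whether the package is in your repos depends on your Raspbian version, so treat this as a pointer rather than gospel:

sudo apt-get install xinput-calibrator
DISPLAY=:0 xinput_calibrator   # tap the targets, then paste its output into 99-calibration.conf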


The PC Market, Going forward in 2015

I was asked earlier where I thought things were going with the home PC market. With the announcement of free Windows 10 upgrades for many, it’s obvious that now even the big players think things are about to change, so here are my thoughts. Jump in and belt up…


At Home:

The landscape of home computing has changed in the last couple of years and is still changing. Cheap, powerful portable devices are becoming ubiquitous and are replacing the humble PC and laptop, and this is changing everything. Desktop sales to domestic users have been slowing down quite substantially for a number of years; this isn’t news. Most sales now tend to be to people who don’t want the hardware lock-in of a laptop. That really means gamers, tinkerers and those with specific needs for a tower; most home users are now on laptops.

Smartphones and tablets are increasingly replacing these laptops in day-to-day use. Why fire up a laptop when you already have a booted phone? These devices are getting more and more powerful and integrated with other aspects of our lives, and are starting to push the laptops out now as well. Web browsing, email and online banking are pretty much it for a lot of the machines I support, and all but the most basic tablets can cope with that.

On top of this, it’s all getting cheaper. I was asked about repairing a £90 tablet the other day; after hunting, the parts were found at £55, and then, as the thing is glued together, I needed to allow for a good hour of work. At that point it’s not worth doing, so in the bin it goes. With laptop prices falling rapidly (I didn’t say GOOD laptops), we are heading into disposable territory, and to be honest the design of the newer machines doesn’t lend itself to repair anyhow, as we are returning to the days of proprietary, non-upgradable hardware, which is somewhat ironic.

So where is this going? Well, for the dozens of small PC vendors around, it’s not good. The box shifters are doomed as they stand. Of the four PC vendors in my immediate area I don’t expect any to survive based on the above. All four will find themselves fighting harder and harder for a smaller market share, and with exclusivity deals going on with the big players for tablet hardware and phones it’s not looking great.

But there’s still repairs, right?
For now. However, we’ve seen this happen before. Rewind to the 80s and 90s. A TV or VCR was a costly investment back then, and a very healthy economy existed of repair shops, authorised dealers and A/V shops (this is where I started out). As the trend for cost reduction kicked into gear at the end of the 90s and start of the 2000s, these items started to get cheap. All of a sudden it wasn’t worth spending £100s on having your VCR cleaned of the jam sarnies your sprog had fed it, and the second-generation tech was already on notice of obsolescence anyway. In the space of just a few years almost all of these shops vanished, those left hanging on for dear life fighting over the few loyal customers that remained. Then, about 5 years later, prices crashed to the stage we are at now, where that lovely 42″ screen you are sat in front of that cost you £400 actually needs a £600 panel if you break it; it’s not worth it, and most electronics goes in the bin. PCs are going this way now. Take something like the godawful HP Stream that’s all over the TV. You can get this thing for £179, which, for what’s in there, is impressive. Now, you buy one for Billy and, as children do, he breaks it. So in you come, and you are informed that the screen will be about £45-50 and the labour about the same again (that’s actually a pretty cheap price) and, hang on, that’s the 50% rule broken and you may as well replace it. Your insurance company won’t repair it and god knows what else he’s done to it. So you see, we are well on our way already. Once this reaches its logical conclusion there won’t be much point in doing the tablet repairs either; to make this all viable as a business you’d have to do so many at such a low cost that you’d end up making pence at a time and would have to run so close to overload it’s just not worth it.

From the domestic point of view I think our days as small PC vendors to home users are numbered, maybe not in 2015, but soon.

Business

This is a slightly better story. In short, things aren’t going to change here that quickly. Businesses like continuity and often spend a lot of money on keeping it so. Business desktop sales are actually picking up, as are server sales, but it’s sadly not all good news.

Businesses are looking increasingly to save more money and IT always takes the hit, and in this case it has set itself up for it. With the increasing convergence of technologies that were once totally separate, such as telephony and paper filing, simply dropping a server in and going home isn’t going to cut it anymore. Businesses are looking more and more at tying everything together, and it seems that among SME IT vendors the skills to do so aren’t commonplace, and contracts are being lost because of this. It used to be that if you didn’t know how to do something you’d find a way round it or, as was done by some businesses, simply say it was too expensive or couldn’t be done; this didn’t wash at the time and certainly won’t now.

Many businesses are now wholeheartedly embracing technologies such as cloud computing, roaming profiles and network storage in ways they weren’t a few years ago. The knock-on from this is that the desktop machine is becoming more and more of a thin client, and for some of you this will be familiar. A lot of people went down this route in the early 90s and before, in the shape of X terminals. Although they worked, PCs overtook them pretty quickly, although this didn’t stop Larry Ellison of Oracle saying we’d all be using thin clients in the future. He took a lot of flak for that and, it turns out, he was probably right. With the massive increase in computing power and network performance since we first went this route, it’s almost becoming a given. A PC in an enterprise network will probably log into a domain, it’ll do nothing more taxing than office work or web browsing in most cases, and it’ll be working with all its files and user data stored on a server. This is nice from a support and backup point of view, and to be honest there aren’t many good reasons for it to be done another way. The upshot is that all that’s doing any work is the processor, and if we virtualise that machine you can have a slice of a much more powerful CPU on a server, and that’s where we are headed. This is bad news for the PC because, of the current crop of thin clients, only a handful could be called PCs; the rest are normally ARM or MIPS based, closer to tablets and smart TVs than to a PC. When a thin client goes, you throw it out. There’s no software to troubleshoot, just the hardware.


So where does this leave us? Well, let’s draw some conclusions:

The PC landscape is changing, this change is accelerating, and it will drive the demise of the PC as we know it; this *is* happening now and would almost certainly form part of Microsoft’s decision on Windows 10. PC desktops have the same fate as our friend above: he’s dodged the bullet, but it’s still going to get him. Laptops won’t be far behind. It’s worth noting, though, that the main opposition to PCs, in the form of tablets and the ARM ecosystem, offers nothing that can displace the hardcore gamers, though it is coming. Ultimately, Win10 will continue the march towards SaaS, or Software as a Service, and for that goal to be achieved Microsoft needs to break the connection between Windows and the PC.

In business the change is still there, but it’s slower; there is always going to be some requirement for that little bit more power than a thin client offers, and PCs will hold on there for a while. The amount of existing investment will draw out the death of the business PC a little longer, BUT bear in mind that I could pick out all of my larger corporate customers and have them on thin clients, without any appreciable loss of productivity, in a few days; the change may be faster than in the domestic market.

It’s interesting to note that there has been little technical drive from Intel of late; most changes have been incremental, and the same could be said of AMD, almost as if Chipzilla and AMD are in a holding pattern. AMD seem to be hedging their bets: not only are they headed down the APU route, and the tight coupling of GPU and CPU lends itself quite well to a change of CPU core, but they are also pouring a lot of money and development into seriously fast ARM chips: http://www.amd.com/en-us/press-releases/Pages/64-bit-developer-kit-2014jul30.aspx There is also very little speculation, or indeed any information, on what Windows may become after 10, or indeed whether there will be anything; the money maker here now is Office, and MS made that abundantly clear.

So, the box shifters that are still out there need to shape up and change focus, and be aware that the golden goose is about to buy the farm. It’s time to lead and innovate and find better ways to engage and serve your customers. With lock-in deals with larger vendors, the playing field isn’t very friendly, so now more than ever you need to make yourself stand out and move away from relying on PC sales and repair as a core business activity. Find a niche or, at the very least, plan an exit strategy.

I’m not saying that the sky is falling here; we all know it is going to, and it’s going to soon. I’m saying start planning and be ready.


Learning Curve, well, right angle…

I’m working on a tracker system for a customer. Now, my C++ isn’t that good and my ARM assembler isn’t up to something quite so heavy (it’s not JUST a tracker), so I looked at other platforms to use.

In the end I’ve settled on the Pi, not least because it’s not a huge stretch to do what I need to in the basic programming tools available. I need to open two serial ports, fire up pppd and then do some rudimentary data processing and split one of the serial ports.

After some fiddling, pppd behaves and the Pi can connect to the Internet on its own over GSM. Non-trivial, but pppd is an old foe and it was beaten into submission. The hardware was built and tested and the whole thing mounted up on a plate, ready for some serious dev work…

During the tinkering I found out that there was a working Pascal compiler. I know Pascal, I know it VERY well and can do a lot with it; then I found the Delphi-like IDE, Lazarus. In a few minutes I was away. It turns out the repository versions are very old, cue having to download and compile from scratch.

This didn’t go entirely well and took 8 hours and many false starts, but in the end I got there; more faffing and I have an x86 dev box that actually compiles in under a week.

A fight then ensued to get working serial port access, and I *think* I’ve done that, so now I have the unit talking to the GPS already.

Asynchronous TCP… that’s taking longer…

This does open the way to porting a lot of my apps over to Linux, something I have been wanting to do for a while. It also opens up a serious number of possibilities with the Pi and BeagleBone for us. So many projects opened up.


Tracking & Dispatch

We’re coming to get you…

I’ve been asked to look into a tracking and dispatch system for a customer. Now, those of you that know me/my company know that we’ve done this on a number of occasions, the most recent version being Touchdown’s Mesopod system.

All of these units have used dedicated functional units to do each step: a GPRS modem, a firewall/router (the pod combines these), and a client device. Now, for this one we are going to try something different… it’s all going in one unit.

The design brief we have pulls in a lot of old tech for us: GPS, 3/4G networking, sensor systems, chassis electrical interfaces, message passing and database applications. In that lot there is no scary new ground. It’s the client.

A typical system like the Mesopod uses a BSD/Linux-based firewall system. Off the shelf, free and pretty simple. We’ve not bothered ourselves too much with this; it’ll also look after the VPN back to base, and in the past it’s almost always been an Atom-based unit of some kind. The client? Win XPe, the only version of XP that we can still use. But it all works. We have a huge toolbox of apps for sharing GPS units, grabbing data from our boards, providing multiple serial streams and so much more. There’s only one big problem… we are going ARM on this one.


This means we have a problem, in fact, a very large problem. So what software in our toolbox do we have to use here… erm, stuff all. That’s right, girls and boys, we have NOTHING, and worse, there’s pretty much nothing off the shelf we can use either.

This isn’t as bad as it seems to start off with. You see, the first step is we will be using Debian Linux. Right off the bat I can talk to the display, anything I need to hang off I2C, databases, the GPRS modem, touchscreens, GPS modules, Ethernet etc., so that’s a HUGE swathe of work done. As a comparison, the last time we used ARM we programmed it in assembler/C and just getting the touchscreen up took a few days. Here I had the target system up and browsing the net in 30 minutes. So the first part of this adventure is wiring everything up and making it all talk; I can do that, I am good at that…

Original: http://xkcd.com/730/

Now the trick is going to be more about scheduling and priority, learning how to break other people’s code and, ah yes, I do need to write a little of our own for the client too. I’ll be using chassis interface boards I already designed to find out what’s going on with the vehicle and to turn things on and off. I will need to write drivers for this, and I’ll need to find another serial port, as I’m going to be using the ones I have already; in fact I already have a board of FT232s in my mind’s eye.

It’s going to be a bit of a challenge. The idea is to end up with a nice small box and a 7/10″ touchscreen that chatters away to the head office, talks to GPSGate Server nicely and just ties up all the bits nicely. It’s also going to be our first project with any amount of Linux coding involved, so that will be fun too 🙂


Sky+ HD HTPC/Media Centre Build PT 4

If you’ve not read the other parts, please look here: Part1 Part2 Part3

The boards arrived from OSH Park this morning. I’ve had a look and there may be an issue with the pads for the USB controller. It looks like a pad has ‘bled’, but there is a tiny bit of clearance, so running a scalpel in there may fix it. It’s also clear that the FT232RL was a bad choice; it is on a stupidly fine pitch and I can see it causing problems. As it is, it’s going to be a paste-and-reflow-oven job. Now, I could be lazy and simply pick up the pads and use a TTL converter, however this means I need to either internally mount a USB socket or strip off the USB plug. I wanted to keep this all professional looking, so that’s out.

The parts were ordered today so we should have something to build and flash tomorrow, here’s hoping.

Now, two considerations. There were two bits of code that needed looking at: the power LED and power control. With the mock-up, the PIC now boots, grabs control of the serial line, sends commands to the display, then hands the line back to the PC. There seem to be two issues though.

1) The panel ignores the PIC. I can only assume this may be a result of me using diodes rather than an OR gate (I didn’t have one) and will be OK on the new boards. However, looking at the timings, I did notice that there is a definite difference between what my PC thinks is 19200 baud and what the PIC thinks it is. There’s a tiny amount of error in the PIC timings, and it’s possible parasitic capacitance, bad routing etc. on the breadboard is causing it. I haven’t scoped what is going up to the display, so there may be other nasties there; the analyser just says on or off. The power LED was a nice extra, but the machines do boot fast enough. I had intended to use the blue rings to indicate fault conditions, but never mind.

2) The remote LED no longer works with the front panel buttons. It’s not a problem, but I’d like to know why. The thought is that the PC is replying before the PIC has finished processing and may be causing a funny with the serial routing selects.

Power control. This still isn’t done but will get done tomorrow (or in a minute). This is simple. The board should power up and wait for ‘n’ seconds for the ATX 5V rail to come up. The board should automagically do this, but I’ve noticed Biostar boards don’t always. After this delay, an ATX power-on will be sent via the relay and it’ll wait again. All being well, the 5V line will come up and off we go. If this doesn’t happen, the hope was to display two flashing halves on the ring to say that something went wrong.

Once the system is up, pressing the power button will send a sleep command via the driver to Windows; the system should then hibernate. Holding it down will force an ATX power-down. When the PIC sees the ATX 5V line go low, it’ll then wait either for another power button press to pulse the power switch OR for the 5V line to come up on its own, signalling the machine was woken up.

I have to code this, but with simple flags it won’t take too long.

I’m also thinking about using a bootloader here too. There is a lot more I could add to the firmware, but right now it just needs to work. A simple bootloader would enable updates to the firmware when the driver application is updated. The micro does receive serial data from the panel, PC and the IR receiver, and being a PIC16F870 there’s a lot of room for things like timers, remote codes and the aforementioned watchdog too.

Almost there….

Linux vs Win Server TCO – The Reality

While talking with a colleague yesterday, the issue of their server came up. The IT guy has, and I can’t fault the logic, gone with Linux and Samba to power what is essentially a Windows network. This would be an ideal situation except for a small number of flies in the ointment.

At the most basic level, drivers are the first hurdle. The Windows drivers include support for all manner of offload engines, acceleration technologies and so on; the Linux drivers mostly lag way behind the Windows ones, but they are out there as a rule, and just getting the bare server up to snuff can take a while. So, starting from a bare server, we have a TCO at this point of £0 for Linux and £450 for Server 2012. If we run on from here assuming you just install and go, you’ll already be having problems: you’ll be seeing performance hits on network and RAID hardware, easily fixed with drivers, but we are now talking a good hour or two of fiddling and faffing with the manufacturer’s Linux drivers (if they exist) versus just running a CD under Server 2012. Say your time is worth £50 an hour and the drivers aren’t quite right, so a build from scratch is needed: two hours, and we stand at £100 vs £450. From experience, four to five hours is closer to reality, as there will be missing things you don’t find.

On top of that, we now have to get Samba working on this server. Domain configuration with Samba isn’t fun and the documentation is poor; we can easily have blown a day by now, so that’s 8 hours, or £400 vs £450 TCO on the software side. Now here’s the nasty bit. That Linux server could EASILY take you up to a day to get working to a usable extent. The Windows server, from bare metal to running a domain: two hours, tops. Being fair, we will allow for those two hours, so £400 vs £550. So, excellent, you saved £100, and probably shaved a few days off your life too. Only it’s not done. As a rule, AD and Windows *just* work at the basic level; you’ll now be looking at five minutes per machine to connect the PCs. With Samba, prepare for permission and authentication hell, possibly on a per-PC basis. In short, unless you are a Samba god you won’t get it right first time, and file permissions are the usual cause. The Windows side could be finished in four to five hours total, from start to users logging in with roaming profiles, and you could probably have Exchange going too in that time. With Samba you may end the day at ‘almost works, I’ll fix the niggles tomorrow’. This is a ‘bit of string’ situation, but already there is a disparity here. The Windows server is done and working; I’ve gone home and am reasonably happy about a productive day. The Samba installer is probably not having such a good day.

Now let’s look at the basics of what we might want to do with this server.

For a small office I’m probably going to want IIS for various tasks (it’s fairly central to 2012, so not having it isn’t really an option TBH), so I have a web server and a scripting engine out of the box. I have a pretty usable and simple user administration tool, and unless I install Server Core (and even then not necessarily) I can manage users, groups, email routing, web setup, replication and indeed about 90% of basic office tasks, and be done by the end of that first day.

Linux, well, we’ve fixed Samba. I now have to install Apache, almost certainly some flavour of PHP, probably MySQL (we do on Windows anyway) and then start installing admin interfaces. Now, most distros do come with various bits and bobs already there; however, you’ll find apps start asking for newer PHP versions, or extensions which need newer (or older) versions. It can get knotty fast.

It’s not unfair to say that the Windows server would cost you about £800 and be done on day one. (I’m not factoring in the cost of Exchange, the reason being that getting the same functionality on Linux is hell/impossible.)

Linux-wise we are at two days, so again £800, and with a reduced level of functionality. Now, there’s another big trick here that many miss. That extra day? Well, there’s your £400, but if this is a live or new setup you could have to factor user downtime in there too, so let’s look at that.

Users cost less than consultants; how much yours cost is down to you, but let’s assume the following. Your average user is on £25K. There are about 250 working days per year, so that’s a nice easy £100 a day; for an 8-hour day that’s £12.50 an hour. Think of a small office of 10 users (yes, I’m keeping the maths simple), so that’s costing you £125 an hour to pay them. Now it’s about to go wrong…

On Monday you’ve scheduled a server upgrade, you are doing it yourself, and let’s say it’s a complete refresh and start-over. Your users have been told no PC use (it’s a little contrived, but it DOES happen in small business). So for Windows we were down for a day; that’s £1000 in lost time. You know where this is going, so we have £2000 of lost time on the other. It’s interesting to note that the extra £1000 would purchase a low-end Windows server!

It’s not hard to see the Linux admin over a day behind. If that’s so, it’s all gone wrong now, as the ‘free’ server’s cost is heading on for twice the paid one.

It’s not all gravy on the Windows side, though: troubleshooting MS products can be an issue, debugging and logging can be lousy, and that can slow down finding some issues. There is a very twisty mindset required sometimes. That said, MS will help you if you call (within defined limits).

It’s all up and running, and now we need to think about day-to-day use. Adding a new printer and sharing it can quite feasibly be done in a few minutes on a Windows server, and this includes making sure users get to it with no work from them, no driver issues etc. The whole thing can certainly be comfortably wrapped up pretty fast. Linux offers no such mechanism, and indeed it’s even possible that there are no native drivers for your printer; with a USB device this could mean a lot of work.

So, to sum up: think long and hard about what you want to do. If you have plenty of time and it has no associated value (you need to look at the way you use your time!), want to use older hardware, maybe only want the absolute basics of a Windows network and have low blood pressure, then go for it. There are situations where it is exactly the right tool. If it *has* to work, then it may be significantly cheaper to go Windows.

Now, to stop the flames: we normally deploy both. In fact, we often use Samba for slave/backup operations, and most of our sites do have at least one dedicated Linux system. Some things *do* work much better; often the server will be providing a web backend or database services to Windows (Access can use MySQL). Right tools for the right job. I also know I’ve simplified a LOT in here, but the numbers do stack up; if you sit down and work your project out on paper you’ll see similar results. At best the Linux option may well cost you as much, and deployment methodology can make a big difference, but it’s unlikely the TCOs will ever be better than equal, and the £0 TCO of Linux is a myth for business. Businesses rely on saving money as a rule; if Linux were a magic bullet, Windows would have no market.


OSS Rant, AKA It’s been a month since I last tried to make an OSS app work

Well, those of you who know me will know that I live by the rule ‘right tool for the right job’. Sometimes, though, it turns out the ‘tools’ aren’t exactly what you expect.

My expectation of an application is simple:

Install

Setup

Run & Use

Windows does this pretty well, but sometimes Windows isn’t the right tool; in this case I have a little ARM8 system I want to use. This means Linux. So, OS installed, RDP on there, X set up, all going well. Now today’s challenge is sat nav. After some digging, it turns out I want Navit. There’s an Android APK for it, so I decide to try it on my phone first. The reviews aren’t great and centre on UI issues; pretty typical of a lot of OSS apps is a poor GUI that makes sense to the devs or was an afterthought. If this works I’ll be doing major overhauls on the GUI, so I don’t care too much. Download it and install it, so far so good. OK, it needs maps, so off we go to the built-in downloader… oh dear.

To say the downloader is demented does not do the full horror justice. On the surface it *seems* OK. Scroll down, select UK, and off it goes. It’s a big file, so I decide to do something else so the phone doesn’t sleep. Big mistake. The second I try to drag the status bar down, the download aborts, and then won’t restart. I try the England maps and try again, the menu key this time… blam, the download aborts, won’t restart, just says there’s an error. Off I potter to the maps folder and remove the temp files. It’s happy again; I start the download again (350 MB) and go back to doing what I was doing. Blam, it dies again, this time because of a Wi-Fi glitch. Once again I have to clean out the folder and try again. In fact it took 8 attempts to get the map, and then I had to do it again. Select a downloaded map, touch the screen and it removes it; no confirmation dialog or warning that it’ll do this!!! FFS, guys!

The map is downloaded and supposedly installed, excellent, only nothing happens. In fact the app does sweet FA until the phone gets a fix. It doesn’t tell you this, just shows a white screen with half the icons missing until you tell it to look or to find an address (I won’t start on the search function!).

Once it’s up and running it does look like what I need. However, there is this complete lack of finesse, and you don’t have to look far to find it elsewhere. Two big ones: Apache and Samba. OK, text-based setup and high voodoo can make more sense for some tasks, but the config interfaces are inconsistent, leaving mines for newcomers to step on for no reason.

Here’s someone else’s explanation of Apache…

There is a lot of OSS stuff that *just* works. It’s sadly outweighed by the crud. I say crud, but in my experience if you can make these apps work they normally do what they say on the tin and do it very well. It seems there are a few routes to trouble.

1) I need to do X, I made it work, now to share it with the world!
This seems to result in software that’s way too closely tied to the dev’s hardware, often has no docs or what is there is poor, and MAN is it easy to break. If I’m going to throw something open to the world I *always* make a point of putting a good amount of testing/error checking in there and, at the very least, make sure it runs on something other than my desktop.

2) Feeping Creatureism
This is an easy one to fall into, and anyone that writes software has seen it and done it. You start with an app to draw an OSD on a screen. Just a bit of green text at the bottom is all you need. Halfway through you realise you could put it anywhere, so you code that in; then you think about graphics, so in that goes; then you could allow colour changes, so in that goes. The basic requirement at this point doesn’t actually work, and it often ends up being a rush to make it all work just well enough, producing code rife with odd bugs, kludges and spaghetti. Your simple program is now many MB and requires a degree in quantum mechanics for anyone else to follow.

3) Boredom
You’ve written your app, now you want to play. Maybe you did the right thing, made the basic app work avoiding 2), and now you want to go and add bits; that’s great. However, you’ve not documented anything, and the UI makes sense to you and no one else. You’ll go through four or five versions and then you’ll get bored and stop working on it. These tend to be the apps that people find and start jumping for joy over, because someone else is trying to solve that exact problem. This normally ends up with the app being impossible to configure/use, or it works so very nearly, bar a tiny bug. Because the dev got bored, that bug was never fixed, and because they didn’t document anything, no one else knows how to.

There are many, many more causes, however a lot of this is mitigated under Windows, and this does Linux a lot of damage. If I write two apps to do the same thing, the Windows one will take a fraction of the time: it has a GUI, and I rarely need to think about dependencies as *most* Windows systems are equal (I use Delphi, so none of the DLL dependency rubbish of VB/VC apps, which *still* isn’t resolved). Unless I’m hooked into something like a driver, my code will run as-is. The workflow in creating a Windows app makes it harder to produce an app with a useless GUI too. Under Linux I have to think about so much more, so many more steps. I need to think about dependencies, and unless I’m mad and have forever, binaries aren’t going to work because every install is different. I could compile for distro X version n, but the next fork/version/update will likely be different and will break it. This means testing becomes all but impossible too. I can test on Win XP/7/8/10/Server 2008/2012 inside of 15 minutes; they are all here as VMs, and for most cases I can safely assume that if it behaves on 8 and Server 2012 I’m going to be golden.

For Linux and OSS to be seen as a serious alternative, all of this needs to be addressed. Through design, Android has *almost* solved the issue; if it weren’t such a pain, that’s where I’d be going as a business for our embedded stuff (and we still may).