
[ts-7000] Re: announcing tsctl!

To:
Subject: [ts-7000] Re: announcing tsctl!
From: Michael Schmidt <>
Date: Mon, 03 Oct 2011 12:06:19 -0700
On 10/3/2011 6:18 AM, tom campbell wrote:
> Well... count me as an enthusiast.

Glad to hear it!

> Read through the web page (which I thought was well written at the right 
> detail level).  Seems like an excellent approach.  I'm surprised there's not 
> more chatter about the announcement on this group.
>
> Very, very happy to see Python support.
>
> A question.  Are the canctl, dioctl, and spictl protocols/APIs you mention 
> proprietary Technologic Systems protocols/APIs, or are they some industry 
> standard?  Do they exist now, or are they planned for the future?

The canctl, dioctl, and spictl protocols already exist and are 
implemented in the utilities of the same name on selected products.  The 
tsctl and logctl protocols are new for tsctl.

These protocols were developed by Technologic Systems, but they are not 
proprietary in the sense that they are open, documented, and not 
restricted.  If there were interest in aligning with an industry 
standard, that could be done, but I doubt it would make much sense on an 
embedded system to go to the effort of a full-blown CORBA 
implementation. ;)

> When is 7370 targeted?  What can I do to make it happen faster?
> [...]
> What's the license and source availability?

Implementing tsctl support for a new platform is fairly 
straightforward: determine the object instances needed for the platform, 
determine how they are related to each other, implement those that are 
specific to the platform, create any platform-specific regression tests, 
and run all tests applicable to the platform.
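
As a purely illustrative example of the last step, a platform regression 
test in Python might look something like the sketch below.  The class and 
method names here are hypothetical placeholders, not the actual tsctl 
API, and the test assumes a fixture that wires two DIO pins together.

    import unittest

    class DioLoopbackTest(unittest.TestCase):
        """Hypothetical platform regression test; real tsctl names may differ."""

        def setUp(self):
            import tsctl                      # hypothetical Python binding
            self.dio = tsctl.DIO("default")   # hypothetical class/constructor

        def test_loopback_pair(self):
            # Assumes pin 1 is externally wired to pin 2 on the test fixture.
            self.dio.Set(1, 1)
            self.assertEqual(self.dio.Get(2), 1)
            self.dio.Set(1, 0)
            self.assertEqual(self.dio.Get(2), 0)

    if __name__ == "__main__":
        unittest.main()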

I've passed on your interest in the 7370 to my boss, who ultimately 
controls the direction of development.

Is there any community interest in porting tsctl to other platforms, 
such as our older products (TS-72XX, TS-73XX, etc.)?

We plan to make full source available starting with the first release.  
I am still pressing the license question with my boss and would welcome 
feedback from you on this as well, including any arguments one way or 
the other that could help my case.  There will not be any licensing 
costs involved, and I have been told that the goal is to make the source 
code available to everyone without requiring our customers to release 
derivative works.  To this end, one option under consideration is a dual 
license: the main license, available to everybody, would be an open 
source license such as the GPL, with a second option such as BSD 
available only to customers running the software on one of our products.

> My only concern might be speed for some high-bandwidth requirements.  I'm sure 
> it will be fine for many applications.

This is one area of special interest to me.   The canctl core code was 
significantly rewritten for tsctl.  On an otherwise idle TS-4500, the 
old canctl code could consistently send around 3500 messages per second 
at 1 Mbps, while the new code can send nearly 6000.  (For reference, the 
theoretical maximum is ~7142, assuming no gap between messages.)
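
(To sanity-check that number: ~7142 messages per second at 1 Mbps 
implies roughly 140 bits on the wire per message, i.e. a full 8-byte 
frame plus framing overhead, stuffing, and interframe space, sent back 
to back.)

    # Back-of-the-envelope check of the ~7142 figure above.
    bit_rate = 1_000_000          # CAN bus speed, bits per second
    bits_per_message = 140        # implied by ~7142; exact framing varies
    print(bit_rate // bits_per_message)   # -> 7142 messages per second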

While talking directly to the canctl port yields this high performance, 
using the tsctl port results in lower performance.  I believe part of 
the reason for this is thread switching; if tsctl were run as a 
real-time process using SCHED_FIFO, I think the performance would be 
more comparable, but in my experience that also causes severe 
unresponsiveness.  In any case, this is an open area for future 
improvement if there is demand for it.  In most cases, the expectation 
is that users will use the canctl protocol if they need high performance 
or backward compatibility, and the tsctl protocol if they want to take 
advantage of the new features and flexibility.
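
For anyone who wants to try the SCHED_FIFO experiment, here is a minimal 
sketch in Python on Linux.  It only shows the mechanism (tsctl itself is 
C, where chrt -f or sched_setscheduler() would be used instead), it needs 
root privileges, and whether it actually helps tsctl throughput is 
exactly what would need measuring.

    import os

    def make_realtime(priority=None):
        """Switch the calling process to SCHED_FIFO (Linux only, needs root)."""
        if priority is None:
            priority = os.sched_get_priority_min(os.SCHED_FIFO)
        # pid 0 means "the calling process"
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

    if __name__ == "__main__":
        make_realtime()
        print("running with SCHED_FIFO priority",
              os.sched_getparam(0).sched_priority)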

> I would suspect that many applications would need to bypass at least the IP 
> layer and/or server side "locking".

If there is no contention, locking is fairly low overhead: we are using 
pthread mutexes, which are very fast in the uncontended case.  There is 
a bit of overhead on top of this to implement our locking model, which 
prevents deadlocks in some situations, but again, the uncontended path 
should be fairly fast.  One nice thing about the Python interface is how 
easy it makes setting up tests to measure such things.
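
As a rough sketch of what I mean, a micro-benchmark in Python can be as 
simple as timing N repetitions of a callable.  The callable below is 
just a placeholder; in practice it would be a single tsctl request.

    import time

    def measure(operation, count=10000):
        """Return how many times per second 'operation' can be called."""
        start = time.monotonic()
        for _ in range(count):
            operation()
        elapsed = time.monotonic() - start
        return count / elapsed

    if __name__ == "__main__":
        rate = measure(lambda: None)    # replace the lambda with a tsctl call
        print("%.0f operations per second" % rate)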

> To that end I suggest that the server side API to the hardware be "clean" so 
> a user program could talk directly to that API (bypassing the IP layer). Maybe 
> incorporate a mechanism in the protocol to tell tsctl that it can't use 
> subsystem xyz because the user is talking to it directly. The design 
> philosophy looks like you might have addressed this.


Yes, there is a layered approach to achieving higher performance as it 
is needed, at the expense of some flexibility.

At the top end, the easiest way to get performance is to identify where 
the bottlenecks are and implement new classes or API functions as needed 
to get around them.  The Bus class already contains examples of this: it 
implements functions for setting, clearing, assigning, and toggling bits 
in a single command, to avoid the need to perform read/modify/write 
cycles across the network.  And while for small tasks it can be useful 
to access the bus directly across the network, for more complex tasks 
other classes use the bus directly on the server.
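
To make the read/modify/write point concrete, the difference looks 
roughly like the sketch below.  The method names are placeholders for 
illustration, not the actual tsctl Python API.

    # Toggling one bit in a 16-bit register over the network.
    # Placeholder method names; the real tsctl Bus API may differ.

    def toggle_bit_rmw(bus, address, bit):
        # Two network round trips: read the register, then write it back.
        value = bus.Peek16(address)
        bus.Poke16(address, value ^ (1 << bit))

    def toggle_bit_single(bus, address, bit):
        # One round trip: the server does the read/modify/write itself.
        bus.BitToggle16(address, bit)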

If this is insufficient, a more specialized and lower-level solution 
would be to implement an application that directly uses the C API on 
direct (non-network) objects, just as the server does, and not implement 
the network functionality at all.

At the bottom end, a developer who requires the absolute maximum 
performance could use the object API as a shortcut to avoid needing to 
look up registers in the datasheet, and could instantiate each class 
inline to avoid object call overhead.  This has the potential to be a 
quite involved procedure, as it is basically a transform of each object 
into separate source code with hard-coded functions for each instance.


> I couldn't tell in a quick read whether you were doing UDP or TCP.  I'd 
> support both.  The UDP for speed when it's from process to process inside the 
> cpu.  The TCP for reliability when it's off chip.  Like from the laptop in my 
> office over a satellite link to a 7370 in the middle of the ocean. If I had to 
> choose which to do first, I'd probably pick UDP, but I would make sure there 
> is an upgrade path in the API to TCP (or maybe some of the unix internal 
> queues/shared memory for maybe more speed) which would allow you to implement 
> other transport mechanisms in future without breaking UDP code in the field.

We only support TCP, which was a design choice that predates my 
involvement in ctl development.  If someone is interested, it might be 
worthwhile to create a server thread that handles some UDP requests and 
see what the performance difference is.
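
As a starting point, the experiment could look something like the sketch 
below: a trivial UDP echo thread plus a client that times round trips.  
This measures raw socket behaviour on loopback only; it is not wired 
into tsctl in any way.

    import socket
    import threading
    import time

    def udp_echo_server(host="127.0.0.1", port=5555):
        """Echo every datagram back to its sender."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        while True:
            data, addr = sock.recvfrom(1024)
            sock.sendto(data, addr)

    def time_round_trips(count=1000, host="127.0.0.1", port=5555):
        """Return UDP request/reply round trips per second."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        start = time.monotonic()
        for _ in range(count):
            sock.sendto(b"ping", (host, port))
            sock.recvfrom(1024)
        return count / (time.monotonic() - start)

    if __name__ == "__main__":
        threading.Thread(target=udp_echo_server, daemon=True).start()
        time.sleep(0.1)                 # give the echo thread time to bind
        print("%.0f round trips per second" % time_round_trips())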


> Looking forward to seeing the code and trying it.  Will you make 
> announcements to this group at significant release points?

Yes, definitely.


  ______   Best Regards,
|__  __/                  Michael Schmidt
    ||                   Software Engineer
    ||echnologic Systems (EmbeddedARM.com)
    || (480)        16525 East Laser Drive
    |/ 837-5200   Fountain Hills, AZ 85268
        http://oz.embeddedarm.com/~michael

