To:
Subject: [ts-7000] Re: Segmentation Faults with Arrays greater than 64kB on TS-7260
From: "dave_w_hawkins" <>
Date: Wed, 04 Jul 2007 02:15:41 -0000
> WARNING: This may be a novice question...
> 
> In the past I wrote C code for DOS and I could not have an array that
> exceeded 64kB. When I started with TS-Linux on my TS-7260 I thought
> that I might be able to have arrays greater than 64kB. To my
> displeasure I have found that if I use arrays of more than 64kB I
> eventually get segmentation fault errors. If I keep my arrays to less
> than 64kB the problems disappear. 
> 
> Is this normal for Linux or specific to the TS-7000 series? 

There can be several causes for seg-faults like this:

1. Your processor stack size is too small. If you have an
   OS running then the OS might generate a page fault and
   automatically increase the stack size. I'm pretty sure
   bash can be used to raise the stack size limit for a
   process (ulimit -s). However, it's generally bad form
   to code like this:

   void myfunc(void)
   {
       int big_dumb_array[64*1024];   /* 256kB reserved on the stack */

       /* code ... */
   }


   If big_dumb_array is never used, then you can get stack
   overflows that you never notice. In a low-level system
   running e.g. uCOS-II, you seed the stack to try to
   find overruns (fill it with a known pattern). However,
   this sort of array definition would not be caught unless
   the array was written to. The stack pointer just gets
   moved down by 64*1024 ints (256kB), but nothing gets
   written to the stack.

   If an ISR occurs while you're in this function, then
   wham, stuff gets written to the stack and you've
   got problems. And worst of all, they may not be 
   immediate problems.

   I'm not sure exactly what Linux does ... to be safe,
   write the code with a #define for the array size, and
   then fill the array each time (see the sketch at the
   end of this point).

   I'm fairly certain that the MMU will detect you're out
   of your zone and will SEGV on you. However, it's possible
   that you can step 'slightly out of bounds' and you won't
   get a SEGV. That can be the hard thing to debug.
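
   Here's a minimal sketch of that fill-it-each-time idea
   (the names BIG_COUNT and touch_test are just mine for
   illustration). Writing every element up front forces any
   stack problem to show up immediately, instead of lying
   dormant until an interrupt happens to write into the
   overrun region:

   #define BIG_COUNT (64*1024)

   void touch_test(void)
   {
       int big_array[BIG_COUNT];
       int i;

       /* touch every element so an overrun shows up right here */
       for (i = 0; i < BIG_COUNT; i++) {
           big_array[i] = 0;
       }

       /* ... real work using big_array ... */
   }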

2. Check the assembler output.

   It could be that the default compilation mode is for a
   'small memory model', which generally means that a
   register is loaded with a base address, and then
   offsets are used. This is often done for processor
   stacks ... the register is usually called the frame
   pointer. In the function prolog, the old frame pointer
   is pushed on the stack, the frame pointer is loaded
   from the stack pointer, and then the stack pointer is
   moved down to make room for locals. All the function
   arguments and local variables are then referenced
   relative to the frame pointer.

   I can't recall if that's what the ARM code does ... 

   The 'small memory model' is generally preferred,
   as you only need one assembler instruction to get
   to things that might otherwise require a full
   32-bit address. If you compile for a large memory
   model, then you'll generally get repeated
   instructions, e.g. load an address register, then
   dereference that register. You'll often take the
   hit in terms of code size and speed (for tight
   loops involving pointers). And code size increases
   can blow the cache and cause a further loss in
   speed.
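
   If you want to look at the prolog for yourself, here's
   a tiny test file (the file and function names are just
   placeholders). Compile it with something like
   "gcc -S -O0 -fno-omit-frame-pointer prolog.c" and read
   the generated prolog.s to see whether the locals are
   addressed relative to the frame pointer:

   /* prolog.c - just for inspecting the generated prolog/epilogue */
   int sum_locals(int a, int b)
   {
       int local_array[4];   /* forces the prolog to reserve stack space */
       int i, total = 0;

       for (i = 0; i < 4; i++) {
           local_array[i] = a + b + i;
       }
       for (i = 0; i < 4; i++) {
           total += local_array[i];
       }
       return total;
   }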

3. It's just bad style :)

   For large arrays use malloc (there's a malloc sketch
   after the example below). If you are in an RTOS and
   you don't like malloc, the key is to allocate the
   large arrays statically, and then access them using
   pointers.

   Using a pointer generates different assembly code.

   Check it out for yourself. Write some C code with
   an array, access it by index, and then access it
   with a pointer.

   Here's what I'd try ... (I just typed this in, so
   it might need editing to compile):

   int main()
   {
      int array[10];
      int i;
      int *p;

      /* array indexing */
      for (i = 0; i < 10; i++) {
         array[i] = i;
      }

      /* pointer code? probably depends on optimization */
      for (i = 0; i < 10; i++) {
         *(array + i) = i;
      }
  
      /* pointer code */
      p = array;
      for (i = 0; i < 10; i++) {
         *p++ = i;
      }
      p = 0;  /* will cause a SEGV if accidentally used again */
  
      return 0;
   }
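
   And here's a rough sketch of the malloc approach for the
   large array itself (I made up the size and names purely
   for illustration). The array lives on the heap, so the
   process stack size no longer matters, and a failed
   allocation gets reported instead of silently corrupting
   memory:

   #include <stdio.h>
   #include <stdlib.h>

   #define BIG_COUNT (256*1024)   /* 256k ints = 1MB, well over 64kB */

   int main(void)
   {
       int *big;
       long i;

       big = malloc(BIG_COUNT * sizeof(*big));
       if (big == NULL) {
           fprintf(stderr, "malloc failed\n");
           return 1;
       }

       for (i = 0; i < BIG_COUNT; i++) {
           big[i] = (int)i;
       }

       printf("last element = %d\n", big[BIG_COUNT - 1]);

       free(big);
       big = NULL;   /* same trick as above; a stray use now SEGVs */
       return 0;
   }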

Make sure to compile the examples with optimization off and
on to see the different code that gets generated. Then look
at the compiler's man page for the processor-specific options
to see what else you can play with.

Generally Linux will SEGV when you try to shoot yourself in
the foot. Check out the tool Valgrind. It has lots of checking
features. There's also splint and other lint tools. Using
the g++ compiler will also generate more warnings than the
gcc compiler. And if you can, write and test on x86, then
ARM, then PowerPC, etc :)

It's always worth checking the assembler output when you
come across something strange.

Here's some stuff I wrote up on the LPC ARM micro, which is
equally applicable to the TS boards. I just haven't had a
chance to boot uCOS-II on it yet ... too busy with real work ;)

http://www.ovro.caltech.edu/~dwh/ucos/project_AR1803.pdf
http://www.ovro.caltech.edu/~dwh/ucos/

Cheers
Dave