tungwaiyip.info


ctype performance benchmark

I have done some performance benchmarking of Python's ctypes library. I am planning to use ctypes as an alternative to writing a C extension module for performance enhancement. Therefore my use case is slightly different from the typical one of accessing an existing third-party C library: here I am both the user and the implementer of the C library.

In order to determine the right granularity for context switching between Python and C, I have done some benchmarking. I mainly want to measure the function call overhead, so the test functions are trivial, such as returning the first character of a string. I compare a pure Python function against a C extension module function and a ctypes function. The tests were run under Python 2.6 on Windows XP with an Intel 2.33GHz Core Duo.

First of all I want to compare ways of getting the first character of a string. The most basic case is to reference it as the 0th element of a sequence without calling any function. This produces the fastest result at 0.0659 usec per loop.

  $ timeit "'abc'[0]"

  10000000 loops, best of 3: 0.0659 usec per loop
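
For readers following along, the same measurement can be reproduced with the timeit module directly instead of the shell command; the loop count here is reduced and absolute numbers will vary by machine.

```python
import timeit

# Equivalent of the shell command above, run from Python itself.
# Absolute timings depend on the machine; only the ratios matter.
per_call = min(timeit.repeat("'abc'[0]", repeat=3, number=1000000)) / 1000000
print("%.4f usec per loop" % (per_call * 1e6))
```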

As soon as I build a function around it, the cost goes up substantially. The pure Python function and the C extension method show similar performance at around 0.5 usec. The ctypes function takes about 2.5 times as long, at 1.37 usec.

  $ timeit -s "f=lambda s: s[0]"  "f('abc')"

  1000000 loops, best of 3: 0.506 usec per loop

  $ timeit -s "import mylib" "mylib.py_first('abc')"

  1000000 loops, best of 3: 0.545 usec per loop

  $ timeit -s "import ctypes; dll = ctypes.CDLL('mylib.pyd')"
              "dll.first('abc')"

  1000000 loops, best of 3: 1.37 usec per loop
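
The mylib.pyd sources are not reproduced in this post, so as a minimal stand-in sketch, here is the same ctypes call pattern against the C runtime's strlen (assuming a POSIX system, where `CDLL(None)` exposes libc symbols):

```python
import ctypes

# Stand-in for ctypes.CDLL('mylib.pyd'): on POSIX systems,
# CDLL(None) loads the running process and exposes libc symbols.
libc = ctypes.CDLL(None)

# The call pattern mirrors dll.first('abc'): a bytes object is
# passed to C as a char* without an explicit conversion step.
print(libc.strlen(b"abc"))  # 3
```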

I repeated the test with a long string (1MB). There is not much difference in performance, so I can be quite confident that the parameter is passed by reference (a pointer to the internal buffer) rather than copied.

  $ timeit -s "f=lambda s: s[0]; lstr='abcde'*200000"
              "f(lstr)"

  1000000 loops, best of 3: 0.465 usec per loop

  $ timeit -s "import mylib; lstr='abcde'*200000"
              "mylib.py_first(lstr)"

  1000000 loops, best of 3: 0.539 usec per loop

  $ timeit -s "import ctypes; dll = ctypes.CDLL('mylib.pyd')"
           -s "lstr='abcde'*200000"
              "dll.first(lstr)"

  1000000 loops, best of 3: 1.4 usec per loop

Next I made some attempts to speed up the ctypes calls. A measurable improvement can be attained by eliminating the attribute lookup for the function. Curiously, the same change shows no improvement in the C extension case.

  $ timeit -s "import ctypes; dll = ctypes.CDLL('mylib.pyd')"
           -s "f=dll.first"
              "f('abcde')"

  1000000 loops, best of 3: 1.18 usec per loop
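
For illustration, here is the same lookup-hoisting trick sketched against libc's strlen (a stand-in, since mylib.pyd is not available; assumes a POSIX system):

```python
import ctypes

libc = ctypes.CDLL(None)  # POSIX stand-in for mylib.pyd

# Binding the function to a local name once avoids repeating the
# attribute lookup on the CDLL object inside the timed loop.
f = libc.strlen
total = sum(f(b"abcde") for _ in range(1000))
print(total)  # 5000
```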

Second, I tried specifying the ctypes function prototype. This actually decreases the performance significantly.

  $ timeit -s "import ctypes; dll = ctypes.CDLL('mylib.pyd')"
           -s "f=dll.first"
           -s "f.argtypes=[ctypes.c_char_p]"
           -s "f.restype=ctypes.c_int"
              "f('abcde')"

  1000000 loops, best of 3: 1.57 usec per loop
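
Declaring a prototype follows the setup lines above; sketched here against libc's strlen as a stand-in (c_size_t is an assumption on my part, chosen to match strlen's actual return type):

```python
import ctypes

libc = ctypes.CDLL(None)  # POSIX stand-in for mylib.pyd

# Declaring the prototype makes ctypes check and convert the
# arguments on every call, which is the extra work measured above.
strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t
print(strlen(b"abcde"))  # 5
```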

Finally I tested passing multiple parameters into the function. One of the parameters is passed by reference in order to return a value. Performance decreases as the number of parameters increases.

  $ timeit -s "charAt = lambda s, size, pos: s[pos]"
           -s "s='this is a test'"
              "charAt(s, len(s), 1)"

  1000000 loops, best of 3: 0.758 usec per loop

  $ timeit -s "import mylib; s='this is a test'"
              "mylib.py_charAt(s, len(s), 1)"

  1000000 loops, best of 3: 0.929 usec per loop

  $ timeit -s "import ctypes"
           -s "dll = ctypes.CDLL('mylib.pyd')"
           -s "s='this is a test'"
           -s "ch = ctypes.c_char()"
              "dll.charAt(s, len(s), 1, ctypes.byref(ch))"

  100000 loops, best of 3: 2.5 usec per loop
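
The byref output-parameter pattern can be sketched with libc's sscanf standing in for the unavailable dll.charAt (note sscanf is variadic, which works through ctypes on common x86-64 ABIs but may need special handling elsewhere):

```python
import ctypes

libc = ctypes.CDLL(None)  # POSIX stand-in for mylib.pyd

# An output parameter passed by reference, as in
# dll.charAt(s, len(s), 1, ctypes.byref(ch)) above.
n = ctypes.c_int()
matched = libc.sscanf(b"42", b"%d", ctypes.byref(n))
print(matched, n.value)  # 1 42
```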

One style of coding that improves the performance somewhat is to build a C struct to hold all the parameters.

  $ timeit -s "from test_mylib import dll, charAt_param"
           -s "s='this is a test'"
           -s "obj = charAt_param(s=s, size=len(s), pos=3, ch='')"
              "dll.charAt_struct(obj)"

  1000000 loops, best of 3: 1.71 usec per loop

This probably works because most of the fields in the charAt_param struct are invariant in the loop. Keeping them in the same struct object saves them from being rebuilt on each call.
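
The charAt_param struct itself is defined in the downloadable test_mylib module; a hypothetical reconstruction from the keyword arguments used above might look like this (the real field types may differ):

```python
import ctypes

# Hypothetical reconstruction of charAt_param; the actual
# definition lives in test_mylib and may differ.
class charAt_param(ctypes.Structure):
    _fields_ = [
        ("s", ctypes.c_char_p),  # input string
        ("size", ctypes.c_int),  # its length
        ("pos", ctypes.c_int),   # index to read
        ("ch", ctypes.c_char),   # output character slot
    ]

s = b"this is a test"
obj = charAt_param(s=s, size=len(s), pos=3, ch=b"\0")
print(obj.size, obj.pos)  # 14 3
```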

My overall observation is that a ctypes function call has an overhead 2 to 3 times that of a similar C extension function. This may become a limiting factor if the function calls are fine-grained. Using ctypes for performance enhancement is a lot more productive if the interface can be made medium- or coarse-grained.

A snapshot of the source code used for testing is available for download. It is also useful if you want a boilerplate for building your own ctypes library.

2009.07.16

 

 
