IRC log for #koha, 2006-08-24


All times shown according to UTC.

Time Nick Message
12:55 kados shedges: afternoon
12:55 shedges hey
12:55 kados shedges: btw: I managed to rebuild NPL's leader data with no loss
12:55 shedges cool!
12:56 been working on kohadocs index page, making lots of little changes to make it validate
12:56 kados sweet
12:56 shedges Arabic's a bitch!
13:06 thd kados: some getMARCurls should be in the default view if I remember correctly.
13:07 kados: I believe that 856 $u appears.
13:07 kados: which URLs were you expecting?
22:32 rychi hello koha people
22:32 chris hi ryan ... im just on my way out
22:33 quick walk in the sun, be back in 15 mins or so
22:33 rychi hi chris.  care to answer a question when you're back?
22:38 mason too late ryan, he's off :)
22:47 chris back
22:51 yep, fire away ryan
22:56 rychi The rel2_2 marc_subfield_structure editor should work, correct?
22:57 I am getting a wacky 'hidden' field ... it has some html in it, rather than a tinyint .
22:57 the change seems to be with this escapeHTML stuff.
23:01 chris umm as far as i know it should
23:01 i havent worked on it /looked at it lately
23:02 which templates?
23:02 dewey which templates are OK for dev_week ? npl ?
23:02 chris the npl ones?
23:02 dewey the npl ones are not.
23:02 chris dewey: forget the npl ones
23:02 dewey chris: I forgot npl ones
23:03 chris dewey: forget which templates
23:03 dewey chris: I forgot which templates
23:06 rychi i get the same behavior in default and npl.
23:06 can anyone with an updated rel_2_2 verify that /cgi-bin/koha/admin/marc_subfields_structure.pl looks normal?
23:07 chris not anyone here i dont think
01:33 qiqo hi
01:33 anybody home?
01:40 when will 2.2.6 be available
01:55 hi mohamedimran
01:55 mohamedimran hi
01:55 dewey hi, mohamedimran
01:56 qiqo how's it going?
01:56 I have a problem with koha.. huhu
01:57 the barcodes don't work
02:00 allo??
02:01 hdl hi qiqo
02:01 the barcodes don't work: what do you mean by that?
02:02 qiqo can we speak in english now?
02:02 hehe
02:02 mohamedimran ya
02:02 qiqo yes,, i am having some problems with barcode printing
02:03 when i create the pdfs.. the barcode that i assigned when i catalogued a book seemed different
02:04 mohamedimran hi hdl
02:04 any update on my ldap query
02:07 qiqo like for example i assigned 00001 for a book,, when i printed into a pdf the codes, the code becomes 000000000017
02:07 how does that happen
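What qiqo describes is consistent with the barcode tool padding the assigned number to 12 digits and appending an EAN-13 check digit -- a plausible explanation, not confirmed anywhere in this log, and one zero short of the 000000000017 qiqo quotes. A minimal sketch of that calculation:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed behaviour (not confirmed in the log): the PDF generator pads
# the input to 12 digits and appends an EAN-13 check digit, so the
# assigned "00001" prints as "0000000000017".
sub ean13_check_digit {
    my @d = split //, sprintf("%012d", shift);
    my $sum = 0;
    $sum += $d[$_] * ($_ % 2 ? 3 : 1) for 0 .. 11;   # weights 1,3,1,3,...
    return (10 - $sum % 10) % 10;
}

my $assigned = "00001";
print sprintf("%012d", $assigned) . ean13_check_digit($assigned), "\n";
```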
02:09 hdl:  still there?
02:09 dewey there is probably a minor diff in <div>s, that I missed
02:10 btoumi hi all
02:10 hdl yes.
02:10 hi btoumi.
02:10 qiqo and another question how do i enable printing labels?
02:10 do i need to get the barcode module using cvs?
02:10 btoumi hi hdl
02:11 hdl qiqo: I don't think so.
02:11 qiqo hmm..
02:11 im using 2.2.5
02:12 hdl barcode.pl is a quite old module, which only works with PDF::API2 version 0.33r77
02:12 qiqo yes,, i have 0.33? not 0.3r77?
02:13 hdl And maybe there is a hack to get the good barcodes. I don't remember.
02:14 qiqo so basically, the barcode system wont work?
02:14 :(
02:16 what shall i do ...
02:22 anybody who has other views on this matter?
03:54 hdl btoumi? do you know qiqo?
03:55 hi thd
03:55 btoumi hdl: no, why?
03:56 hdl I wanted to discuss his barcode problem a bit.
03:56 Possibly send him an email.
03:56 But since I don't know him...
03:56 chris there ?
03:57 chris do you know who qiqo is?
04:00 btoumi hdl: ah ok
04:00 hdl: chris isn't here, he should normally be back soon
04:27 chris back
04:29 hi hdl, i think qiqo is from the philipines .. or i might be confusing them with someone else
04:31 hdl A shame I do not have any email for him.
04:31 I would have sent him another barcode.pl
04:32 chris ahh i think he has posted to the list before .. ill look
06:20 kados hdl: you there?
06:38 thd: are you present?
06:39 thd yes
07:18 hello hdl
07:21 kados: I will have to ask Ed Summers for MARC::Charset->ignore_errors(2) which reports errors but does not lose the subfield.
07:22 kados thd: that's what it does already
07:23 thd: it will only report an error, and will remove the offending character
07:23 rather than the whole subfield
07:23 thd kados: really? so it does not actually ignore them completely
07:23 kados ok, I've gotta run ... talk later
07:32 thd kados: I know that you are gone now but that behaviour for ignore_errors(1) is not documented in the man page.
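The behaviour kados describes can be sketched as follows, assuming the MARC::Charset API of the time (the `ignore_errors` class method and the exported `marc8_to_utf8`):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use MARC::Charset qw(marc8_to_utf8);

# Per kados above (and undocumented in the man page): with
# ignore_errors set, a bad MARC-8 byte is reported as an error but
# only the offending character is dropped, not the whole subfield.
MARC::Charset->ignore_errors(1);

my $utf8 = marc8_to_utf8("plain ascii passes through unchanged");
print $utf8, "\n";
```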
07:33 and I rebuild it every week at least
07:42 hdl kados: i am here
07:42 dewey i already had it that way, hdl.
07:43 hdl dewey: who am i
07:43 dewey you are here
07:43 hdl dewey: forget i
07:43 dewey hdl, I didn't have anything matching i
07:43 hdl dewey: forget i am
07:43 dewey hdl, I didn't have anything matching i am
07:46 hdl kados: read your mail on koha-devel.
07:47 Seems my error comes from the fact there is no leader created using MARChtml2xml.
07:47 I will try adding a leader with a as 8th character.
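What hdl proposes can be sketched with MARC::Record (an assumption that the module is available here); in MARC21 it is leader position 09, zero-indexed, that marks the record as Unicode when set to 'a':

```perl
#!/usr/bin/perl
use strict;
use warnings;
use MARC::Record;

# Give the record a leader before serialising with MARC::File::XML;
# MARChtml2xml omitting the leader is what breaks the import above.
my $record = MARC::Record->new();
my $leader = $record->leader();          # blanks on a fresh record
substr($leader, 9, 1) = 'a';             # position 09: 'a' = UCS/Unicode
$record->leader($leader);

print $record->leader(), "\n";
```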
07:58 thd hdl: are you using UNIMARC records?
07:59 hdl I was just trying to add a simple record into my database. And miserably failed at it.
07:59 thd I mean for your current tests where MARC::Charset gives problems?
08:00 hdl UNIMARC or USMARC is not the problem.
08:00 thd hdl: kados has left for a meeting and will probably be out much of the day
08:01 hdl I tried adding utf-8 data but since MARChtml2xml does not produce a valid XML MARC record (no leader), it fails.
08:02 thd hdl: yes a leader is very necessary
08:02 hdl I was aware of this but did not notice there was none.
08:03 thd hdl: when is MARChtml2xml invoked?
08:03 hdl in addbiblio
08:03 line 445
08:04 thd hdl: is it killing leaders in head?
08:05 hdl: it worked fine recently without killing leaders in the record editor for MARC 21 in rel_2_2
08:06 hdl It does not produce leaders in head.
08:06 So no need to kill it.
08:07 thd hdl: I had equated not producing with killing
08:09 hdl: I believe that every IO operation may require blessing the data as UTF-8 from earlier findings about how to use UTF-8 data correctly in Perl.
08:12 hdl thd: That is a HUGE amount of work... and bugs can still be badly hidden, unless we use a good API or good modules that cope with it and use ONLY those modules in our code.
08:14 thd hdl: I believe that may have caused a display problem for using authorities to fill fields in the bibliographic record editor when the authority value contains UTF-8 double byte characters.
08:15 hdl I am just reporting things that are blocking for us. We cannot tell our clients: it is utf-8 compliant provided that you use only non-MySQL utf-8 data.
08:15 thd hdl: that had given you uncomposed characters in Firefox even if they were the correct byte codes I believe.
08:16 hdl I am not speaking of ancient authorities display in firefox.
08:17 thd hdl: I know you were not speaking of it now but that problem was never resolved, was it?
08:17 hdl This problem I coped with and authorities are now clearly and simply integrated and displayed.
08:17 It is.
08:18 look at o6.hdlaurent.paulpoulain.com and search for Egypt in athroponymes
08:18 and you will see.
08:18 s/athroponymes/Anthroponymes/
08:19 thd hdl: what did you do to resolve that problem if not designate the string as UTF-8 before passing it on to the template or HTML?
08:20 hdl o6 is rel_2_2 version and data only comes from Mysql.
08:20 So what I had to do was set NAMES=utf-8
08:20 on the database connection.
08:21 And when getting data and displaying them, they are not "PERL" interpreted.
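hdl's fix, sketched as code; the connection details are placeholders, and with a 2006-era DBD::mysql the way to force the session character set is `SET NAMES`:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder credentials -- adjust for a real Koha install.
my $dbh = DBI->connect(
    "DBI:mysql:database=koha;host=localhost",
    "kohaadmin", "secret",
    { RaiseError => 1 },
);

# Force the MySQL session to exchange utf8 bytes, so data read back
# for display is not re-interpreted as latin1 by the server.
$dbh->do("SET NAMES 'utf8'");
```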
08:21 But with zebra, it is different.
08:21 since zebra records are processed in some ways before being displayed.
08:22 (PERL interpreted)
08:23 THAT mix of PERL processing and untainted PERL utf8 MySQL data is giving problems.
08:23 I wonder how tumer coped with this.
08:23 thd hdl: I see so the problem is you cannot designate the encoding before Perl has mangled it from Zebra?
08:25 hdl thd: For pure data display. I found a workaround I exposed in my mail to koha-devel.
08:25 thd: Now, I try to add utf-8 data to zebra and it fails.
08:26 I merely report things and try and find a solution.
08:26 missing correct leader seems to be the problem.
08:26 But I thought that koha-3.0 was stable.
08:26 thd hdl: kados had imagined earlier that somehow your data was not valid UTF-8 and that was the source of your problems
08:27 hdl hi slef.
08:27 slef we need an email-based bug tracker ;-)
08:27 hi hdl
08:27 slef: test
08:28 thd slef: are you not subscribed to the bugs list?
08:28 slef thd: does it let me manipulate bugs by email?
08:28 thd slef: you mean with commands in the message body?
08:29 or subject line?
08:29 slef thd: yes, or even just add comments to the bug report
08:29 hdl Had he read my mail to koha-devel, he would have seen that I was not working from any base, but simply testing some basic features at the atomic level.
08:31 thd slef: which would need an add comments subject line command
08:35 hdl: kados often does not have or take the time to read messages as carefully as he might
08:38 hdl: he uses mutt as a mail reader which is fine but makes concentrating on more than the briefest message very difficult without a better typography in a GUI to aid the reading.
08:40 hdl thd: we all do that sometimes. Especially when it bothers us ;) But sometimes, i would prefer that he had as much patience as we do when he reports bugs that he considers blocking.
08:40 thd hdl: he also has not been sleeping enough to be alive now
08:40 hdl ok.
08:43 thd hdl: I tend to not report if I cannot report in sufficient detail but my idea of detail is at least two centuries behind the current culture
08:44 hdl: not reporting is also problematic
08:47 slef cvs commit: warning: file `misc/Install.pm' seems to still contain conflict indicators
08:47 oh crap
08:51 fixed
08:53 thd hdl: MARC::Charset is of little value to you if you have no MARC-8 data.
08:55 kados: However, if you did have it, kados reported a couple of hours ago that the behaviour for ignore_errors(1) is not documented in the man page.
08:55 hdl: However, if you did have it, kados reported a couple of hours ago that the behaviour for ignore_errors(1) is not documented in the man page.
08:56 hdl: he stated that ignore_errors(1) reports the error and deletes only the offending character
08:58 slef "Bugzilla has suffered an internal error."
08:58 yay
09:00 anyone else here got SIP(VoIP)?
09:03 hi owen
09:03 owen Hi slef, what's new?
09:04 slef I broke Install.pm and then fixed kohabug 1154
09:04 Got a referral from paul for a koha demo
09:05 Still wondering about sprinting on Makefile.PL and a web installer to try to get it into 2.2.6 instead of Install.pm, but I think 2.3.0 is a more realistic aim.
09:07 What's new with you?
09:08 owen that's quite a bit of new
09:08 thd hdl: I have reread your original UTF-8 koha-devel list message carefully and I see the key point which I had previously not grasped well enough from my own lack of sleep at the time.
09:08 owen I've been working with kados on a new design for the OPAC
09:08 hdl thd: In our addbiblio.pl, we still use MARC:File:XML and therefore MARC::Charset to input a new biblio.
09:08 thd: BUT.
09:09 thd: Since we are the ones that code addbiblio.
09:09 thd: AND we can control utf8 compliance of data provided.
09:09 thd hdl: what UTF-8 data do you contemplate adding from MySQL instead of merely Zebra alone?
09:10 hdl We may be able to add a good xml marc record on our own. (Long, but possible)
09:10 thd: to answer your question.
09:11 thd: I was looking at frameworks data display along with record data.
09:11 thd: This is another reason to go to XML frameworks.
09:12 thd: But this is another development to go through.
09:12 thd: I can propose a dtd for frameworks.
09:13 thd: But I am waiting for some time to think it through and try some xsl transforms in order to make them handy both for input and output.
09:14 thd hdl: but if you use HTML entities in the frameworks then you should not have a problem with multibyte characters, for Latin language set frameworks at least.
09:16 hdl Sorry ?
09:17 thd hdl: HTML entities display fine for me in UTF-8 as long as the record editor does not need to edit them.  The record editor should only need to edit the contents of the fields and subfields, not the labels
09:17 slef owen: javascript-free, I hope ;-)
09:18 thd hdl: I mean use &eacute; instead of é in an SQL framework.  Of course XML frameworks may be better
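thd's entity suggestion in a runnable sketch (HTML::Entities is an assumption on my part -- any entity encoder would do): labels stored as entities survive a Latin-1 pipeline untouched, and the browser renders them regardless of charset.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Entities qw(encode_entities);

# "Benoît" with the Latin-1 byte \xEE for î -- the example hdl uses
# later in this discussion.
my $label = "Beno\xEEt";
print encode_entities($label), "\n";     # Beno&icirc;t
```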
09:19 owen slef: I subscribe to the philosophy of unobtrusive javascript when it comes to the OPAC
09:19 Javascript that enhances where possible, but doesn't exclude
09:19 thd owen: which JavaScript is unobtrusive?
09:19 owen: you answered as I asked :)
09:20 owen to me the Intranet is another matter. I think we can justify requiring librarians to have javascript enabled
09:21 thd owen: only if JavaScript is faster and better not just because you can
09:21 hdl thd: If framework data from Mysql is badly displayed then, any data from mysql will be. Do you follow ?
09:21 thd: then it is not simply a matter of escaping.
09:22 thd: librarians would never like to search for Benoît typing Beno&icirc;t.
09:22 thd hdl: if you are only concerned about framework labels, why are HTML entities not a sufficient solution, even if they are not an ideal solution
09:23 hdl: I was only referring to labels not to record content
09:23 hdl So we French, but also other non-English languages, would have to recode all the MySQL entries.
09:23 thd: labels are contained in mysql tables.
09:24 (at the moment)
09:24 thd hdl: were you using ISO-8859?
09:24 hdl No. I am trying to use utf-8.
09:24 and to get it right.
09:24 thd in SQL frameworks currently?
09:25 hdl Currently, in PURE Mysql, everything works just fine.
09:25 thd I do not mean for your tests but for production systems
09:25 hdl Since there is no perl control over the data.
09:25 But, as soon as you manipulate PERL data and display those data.
09:26 If PERL is not PERL aware, and manages UTF-8, display will be broken.
09:26 if PERL is not UTF8 aware sorry
09:27 thd hdl: why not use two separate scripts for capturing the data and then merge with a third script
09:28 hdl: actually only two scripts should be needed
09:28 hdl And we HAVE to manipulate PERL data through the XMLrecord for displaying marc records.
09:28 That is also a solution I tried.
09:29 thd hdl: the problem you report is that setting binmode for the whole script fixes encoding for one data source but breaks it for another
09:30 hdl: why can you not capture the data in separate scripts and merge to one standard method after Perl knows the encoding of the source data.
09:30 ?
09:31 hdl: what happened when you tried?
09:31 hdl But I consider it inelegant since it supposes manipulating utf8 data magically converted to latin1 by PERL and then converted back to utf8.
09:32 thd: Thinking it over, it would probably be the most HARMLESS solution.
09:33 thd: it worked well.
09:33 (for display)
09:33 thd hdl: Although, If it requires conversion to Latin 1 it would not work for Chinese in MySQL.
09:34 hdl thd: the manipulation was on marcrecord data not on Mysql data.
09:35 thd hdl: you mean because your MARC record data started as Latin-1?
09:35 hdl: what if you were storing Chinese in your MARC record?
09:37 hdl In MARCdetail.pl, line 290, adding: use Encode; Encode::from_to($value, "latin1", "utf8");
09:37 thd: NO For JEE's sake.
09:38 thd hdl: what will the Chinese Koha users do?
09:38 hdl I mean. I am trying to get zebra working.
09:38 I have no slightest idea.
09:39 thd hdl: do you not want Koha to work for every language?
09:39 hdl The fact is that getting a zebra record as xml, if you do not turn on PERL utf-8 awareness, magically provides you with latin1 data.
09:39 thd: let me explain.
09:39 thd hdl: including Klingon?
09:40 hdl let me explain to the end and read.
09:40 Do you understand the first fact ?
09:41 thd hdl: yes that Perl treats everything as Latin -1 unless told otherwise?
09:41 hdl yes.
09:42 So unless you make PERL utf-8 aware, you cannot treat xml records truly as utf-8.
09:43 Do you understand the point ?
09:43 thd yes
09:43 hdl OK. If PERL is utf-8 aware, then since DBI and CGI are not, data RISKS being double encoded.
09:44 So we have those solutions :
09:45 1) keep PERL not utf-8 aware and RE-encode data from xml records to utf8, hoping there will be no data loss.
09:47 Or 2) Make PERL utf8 aware AND try to get DBI UTF8 aware for display, and cope with CGI entries as such, hoping they will always be utf8.
09:48 thd : Have you understood ?
09:48 thd yes
09:50 hdl: I presume in case 2 that CGI will be no problem if Perl has not lost the encoding of the source data along the way.
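hdl's option 2 reduced to a sketch (the helper names from_external/to_external are invented for illustration): decode bytes exactly once at each boundary, so Perl's internal strings carry the UTF-8 flag and nothing is encoded twice.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode encode);

# Hypothetical boundary helpers: every byte source (CGI, DBI, Zebra)
# goes through from_external() once on the way in, and every output
# goes through to_external() once on the way out.
sub from_external { decode("UTF-8", shift) }   # bytes -> characters
sub to_external   { encode("UTF-8", shift) }   # characters -> bytes

my $bytes = "caf\xC3\xA9";                # raw UTF-8 from outside
my $chars = from_external($bytes);
print length($chars), "\n";               # 4 characters, not 5 bytes
```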
09:51 slef is there an encodings wiki page?
09:51 thd slef: do you mean in the Koha wiki?
09:52 slef yep
09:52 thd slef: I think there is try searching for encoding in the wiki search box
09:54 hdl: Is case 3 Perl 6 fixes everything?
09:54 slef owen: for a possible example of needless javascript: are the intranet-main menus switched using javascript instead of css?
09:55 owen Not in the NPL templates
09:55 slef heh... time to bring default up-to-date
09:56 http://wiki.koha.org/doku.php?[…]ncodingscratchpad
09:58 thd slef: the NPL templates are the outdated ones with respect to menu switching; the JavaScript for that in default is newer than the previous design used by both.
09:59 hdl: are you still there?
09:59 hdl yes.
09:59 slef thd: oh. I was hoping that NPL used CSS :hover styles.
10:00 thd hdl: so there are only two cases currently
10:00 ?
10:00 hdl thd: seems yes.
10:01 owen thd: what do you mean about menu switching?
10:01 slef: what do you mean about :hover styles?
10:02 thd hdl: you were just now proposing to use case one which seems dangerous unless you know that you are only dealing with French and ASCII?
10:02 hdl thd: About Case 2: CGI can be a problem if users input data with a non-utf-8 locale and if UTF-8 pages are "posted" with the user's locale.
10:03 thd : I was proposing this because :
10:03 1) it needs few changes to code.
10:03 thd owen: I mean the drop down submenus in default.  Actually, I do not know what created them but I presumed JavaScript.
10:04 owen Why do you think the NPL templates are outdated with respect to menu switching? Because they lack the drop-down menus?
10:04 thd hdl: think of the poor Chinese users.
10:05 hdl 2) It does not change ALL Koha Behaviour.
10:05 thd owen: yes, I do not like JavaScript generally but the submenus are actually newer not that there was anything wrong without them
10:06 owen: I was merely correcting slef about which templates were older in this case
10:07 owen thd: I believe slef said the default templates needed to be brought "up-to-date" because he's opposed to javascript-driven menus
10:07 I'm not crazy about drop-down menus whether they're CSS-based or JS-based.
10:07 thd hdl: how does case 2 change all Koha behaviour?
10:07 hdl thd: I want to think about chinese. But I have only 24 hours a day, and testing takes time. Moreover when explaining the same thing three times, since people seem chilling as soon as we raise some true problems. ;)
10:08 thd owen: chilling?
10:08 hdl: chilling?
10:09 hdl (Yes when you have goose flesh :))
10:09 )
10:09 maybe sweating or swearing would have been better ?
10:09 Just kidding.
10:10 thd owen: I only like drop downs that stay down without the pointer until a selection is made
10:10 hdl: if it was easy it would not be as much fun
10:11 hdl thd: It changes Koha behaviour insofar as all variables will be converted to UTF-8. I already realized that I could not tell PERL to use UTF-8 input since CGI is not UTF-8 aware.
10:12 And then PERL would have double encoded the CGI input.
10:13 But we then have to change every #!/usr/bin/perl to #!/usr/bin/perl -COE
10:13 thd hdl: so how does CGI ever display UTF-8 outside of Latin-1?
10:14 hdl It gently displays anything you pass it.
10:14 thd owen: I dislike any features which require using the pointer instead of the keyboard
10:17 hdl: why had you "realized that I could not tell PERL to use UTF-8 input since CGI is not UTF-8 aware", if "it gently displays anything you pass it"?
10:18 hdl CGI is not utf-8 aware. So it does not mark utf-8 data as utf-8 to PERL. Then PERL re-encodes utf-8 data to utf8²
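The utf8² failure hdl names can be reproduced directly (a demonstration, not Koha code): CGI hands Perl raw UTF-8 bytes without marking them, Perl assumes Latin-1, and a later encode to UTF-8 encodes them a second time.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode encode);

my $bytes  = "\xC3\xA9";                    # UTF-8 bytes for "é"
my $wrong  = decode("ISO-8859-1", $bytes);  # misread as two Latin-1 chars
my $double = encode("UTF-8", $wrong);       # 4 bytes of mojibake: "é" -> "Ã©"

printf "%v02X\n", $double;                  # C3.83.C2.A9
```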
10:19 thd hdl: so you need three scripts to merge from
10:21 hdl: no scratch that maybe
10:22 hdl: we need to force the browser to send UTF-8 to CGI or interpret what is sent and convert
10:25 hdl: tumer has no problem for this because Internet Explorer will transmit UTF-8 encoded data to a page expecting it even if the locale is not UTF-8 on the users machine and can never be under MS-Windows to my knowledge.
10:25 toins hdl: instead of changing every #!/usr/bin/perl to #!/usr/bin/perl -COE, you could use the environment variable PERL5OPT
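The same effect, sketched per-handle: -COE on the shebang, or PERL5OPT=-COE in the environment, marks STDOUT (O) and STDERR (E) as UTF-8 globally; binmode does it explicitly in one place without touching every script.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Explicit per-handle equivalent of the -COE switch (O = UTF-8 STDOUT,
# E = UTF-8 STDERR), avoiding edits to every shebang line.
binmode STDOUT, ':utf8';
binmode STDERR, ':utf8';

print "caf\x{E9}\n";    # the character é leaves as the bytes C3 A9
```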
10:26 thd hdl: Windows uses UTF-16 internally for multibyte locales
10:28 hdl: we could ask the user to perform an encoding calibration test by typing some specified characters with each connection but that would be tedious for the user
10:29 hdl: we could have all the clients using an unfree operating system and running an unfree web browser
10:32 hdl: we could make a guess about CGI submitted encodings from the bytes passed and the web browser ID.
10:34 hdl thd: We could use a UTF8CGI API that certifies UTF8 data from outside is marked as UTF8 ;)
10:39 thd hdl: is that in CPAN?
10:39 hdl No.
10:50 thd hdl: do you have the module?  The author's UTF-8 A-Go Go is down
10:53 hdl No. And I cannot find any trace of it.
10:54 thd hdl: I like the UTF8CGI solution if it works.
10:54 hdl: http://216.239.51.104/search?q[…]l=us&ct=clnk&cd=1
10:55 hdl: but that does not get the module itself
10:58 hdl Since kados had seen it and wrote about this months ago on the encodingscratchpad, which I had only looked at when it was created, maybe he has it.
11:00 thd hdl: do Normes de Catalogage AFNOR for French cataloguing never encode names using the original language scripts from which the names originated?
11:03 slef hang on a mo
11:04 doesn't the browser send the content as whatever charset it thought the form page was?
11:05 so koha displays utf8 => browser sends utf8
11:14 thd slef: not if the browser has a non-UTF 8 locale on a free OS
11:15 slef: the user sees UTF-8 from Koha in the web browser but may not be able to type UTF-8 easily from the keyboard
11:16 slef thd: ugh.  Got test results?  This sounds worth linking in to the encodings page, as it only mentions browser problems on output AFAICS
11:17 thd: surely typing UTF-8 is just a matter of typing characters using whatever keymap one has?
11:18 thd slef: except how the key maps function and display typed characters depends on the locale setting
11:19 slef: there are solutions for MS windows to create UTF text documents as you type
11:20 slef: I have found no similar solutions for the free OS users except changing the locale
11:22 slef: for this to work well applications need to be able to switch locales for their users
11:24 slef: this only seems to work well for MS Windows, maybe OS X, and free OS users (having changed their locale in advance for the Free OS)
11:25 slef thd: I thought locale was independent of xkb.
11:26 thd: so, users would need utf8 fonts and a keymap that can type the characters (most can with Compose AFAIK) and then firefox can display/send it.
11:26 thd slef: it is, but even if you had a keymap outputting the correct characters as you typed them, it would look wrong on screen if your locale did not match
11:27 slef: is compose an application?
11:27 slef thd: why, if utf8 fonts are available and the application is displaying utf8?
11:27 thd: Compose is an XKeySym
11:27 thd: I think it might be called Multi_Key properly
11:27 thd: often it's on left Shift+AltGr
11:28 thd: so to type e-acute, it would be leftShift+AltGr, then ', then e
11:28 é
11:29 hahahahah
11:29 I just realised why some of my apps are displaying OK and some aren't
11:29 thd slef: the fonts only know what to display because of the application and Firefox does not inform them well when they are typing it reverts to locale settings for display of what is typed
11:30 slef X's locale is wrong, so any X fonts are a bit off... things like Firefox are fine, though
11:30 My X locale is fubar, but Firefox displays utf8 input
11:30 let me run a test before I fix my configuration
11:30 see what it does on a web form
11:31 thd slef: I use the US-international keymap which is much easier than compose
11:31 slef thd: what's its name?
11:31 thd us_int or something like that
11:32 slef: you may not find one
11:35 slef ok, here's the test I just did:
11:35 X locale is wrong (ISO-8859-1)
11:35 utf8 fonts are available
11:36 utf8 typing is available
11:36 Firefox has been configured to use Unicode fonts for Unicode
11:36 (erm, not utf8 fonts... iso-10646-1 fonts... my mistake)
11:37 I put up a UTF-8 html page with a form method="POST" on it
11:37 action is the Apache test-cgi script
11:37 which I added two lines to make it print the POST message body
11:37 I opened the http://localhost/envtest.html (the form) with Firefox
11:38 I typed moo then a c-circumflex into the text field
11:38 (c-circumflex doesn't exist in ISO-8859-1 IIRC)
11:38 I submitted the form
11:38 http://localhost/cgi-bin/test-cgi includes (amongst other lines):
11:38 thd slef: the only disadvantage with us international is that you have to type a space after some common keys like double and single quotes, or hold down the alt key for an xterm
11:39 slef CONTENT_TYPE = application/x-www-form-urlencoded
11:39 CONTENT_LENGTH = 14
11:39 POST contents, if any:
11:39 test=moo04%89
11:39 argh, the IRC client bites back
11:39 that last line should be test=moo[PERCENT]C4[PERCENT]89
11:40 thd: so it remaps the dead keys onto the main ones?  I think I have dead keys on AltGr+stuff near the enter key
11:40 I think C4 89 is the correct utf-8 encoding of c-circumflex.
11:41 So, it looks to me like utf8 web form gets sent utf8 input by firefox, even if the system locale is fubar.
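slef's reading checks out; a sketch that percent-decodes the captured body and dumps the raw bytes (U+0109, c-circumflex, is C4 89 in UTF-8):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Percent-decode the POST body captured by the test-cgi script and
# show the bytes that Firefox actually submitted.
my $body = 'test=moo%C4%89';
my ($value) = $body =~ /^test=(.*)\z/;
$value =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/ge;

printf "%v02X\n", $value;    # 6D.6F.6F.C4.89 -> "moo" + U+0109 as UTF-8
```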
11:41 thd yes: the main keys become dead and might be a little different, but it is much faster once you get used to not tripping over the dead keys
11:42 slef: but how did it display as you typed?
11:44 slef thd: as moo then a c-circumflex.
11:45 thd slef: what is your OS?
11:45 slef thd: GNU/Linux (GoboLinux 012+Compiles)
11:46 If the display is not correct on a similar system, then probably the fonts are misconfigured either in Firefox or fontconfig.
11:46 thd slef: maybe GoboLinux has special magic absent from Debian
11:48 slef thd: I've had it working on Debian in the past, but Debian now has defoma and I've not checked how that works for this.  If someone reminds me at a quiet time, I'll build a test machine for it here.
11:48 thd: international fonts are a common thing for English-language developers to not get right first time, sadly. (GoboLinux's main developers are in Brazil IIRC)
11:48 thd slef: when is quiet for you?
11:48 slef thd: when I've not many contracts ;-)
11:49 thd: and no big security updates on debian or osCommerce
11:49 you can often spot a quiet time because I start fixing my unpaid web sites ;-)
11:50 right, speaking of which, I guess I'd better get on with osCommerce updates
11:50 thd slef: which are those?
11:50 slef I'll add a note of this discussion to the encodings page RSN
11:51 thd: www.ttllp.co.uk http://mjr.towers.org.uk/ http://owu.towers.org.uk/ http://www.gnustep.org/ probably some others
11:54 thd slef: so really the main group of users with a problem are not Debian, Red Hat, etc. users with the wrong locale but legacy MS Windows and Mac users who do not have up to date software unless there are also problems with OSX.
11:58 slef: I believe that a significant share of people who actually use the public libraries have a computer system that is a few years old and often may not have fonts installed for UTF-8.

