IRC log for #koha, 2006-07-03

All times shown according to UTC.

Time Nick Message
12:56 thd kados: are you there?
13:21 kados thd: hi there
13:22 thd kados: i spent all night the night before last only because my system s too slow and not enough RAM confirming that nothing that I had done had introduced this double encoding problem
13:23 kados: before my test I considered it likely that I and not Koha was responsible
13:25 kados what was your conclusion?
13:27 thd kados: tumer's conclusion is that Zebra or Koha's Zebra code, and not SQL Koha, is the problem
13:27 kados: Are the Afognak records in Zebra for the server that I am using?
13:28 kados thd: no
13:28 thd: that's a rel_2_2 system
13:28 thd kados: but do you not have Zebra configured for storage there?
13:29 kados on that server, yes
13:29 but not for afognak
13:29 thd kados: well something strange is happening
13:29 kados thd: show me
13:30 thd I am finding the example that I showed tumer
13:31 kados: http://library.afognak.org/cgi[…]detail.pl?bib=154
13:31 kados ok
13:31 where is the problem there?
13:31 ahh, I see it now
13:31 Valerie
13:32 thd: have you tried manually editing that record?
13:33 thd kados: I never touched that record to my knowledge
13:33 kados thd: the encoding problem is in the record itself
13:33 thd: when you attempt to edit it, you'll see that
13:34 thd kados: That is the worrying thing but you can see the same record on my own system
13:34 one moment
13:34 kados but I would suspect that I ruined encoding on import
13:35 or in my pre-processing script
13:35 before I blamed Koha
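A minimal Perl sketch, not part of the original log, of one way such double encoding can be flagged in a batch of records; the input file name is a placeholder, and the heuristic can give false positives on data that was never decoded in the first place:

    use strict;
    use warnings;
    use Encode qw(encode decode FB_CROAK);
    use MARC::Batch;

    # A string that was decoded from UTF-8 twice still forms a valid UTF-8 byte
    # sequence when its characters are re-read as single bytes.
    sub looks_double_encoded {
        my ($chars) = @_;
        return 0 if $chars =~ /[^\x00-\xFF]/;          # cannot be read as plain bytes
        my $bytes     = encode('ISO-8859-1', $chars);  # re-interpret characters as bytes
        my $redecoded = eval { decode('UTF-8', $bytes, FB_CROAK) };
        return defined $redecoded && $redecoded ne $chars;
    }

    my $batch = MARC::Batch->new('USMARC', 'records.mrc');   # placeholder input file
    while (my $record = $batch->next) {
        for my $field ($record->fields) {
            next if $field->is_control_field;
            for my $sub ($field->subfields) {
                if (looks_double_encoded($sub->[1])) {
                    printf "%s \$%s may be double encoded: %s\n",
                        $field->tag, $sub->[0], $sub->[1];
                }
            }
        }
    }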
13:37 thd kados: tumer said he had a record with the same author in his system with the same double encoding problem
13:38 kados: on my X-windows system the fonts do not display correctly for the result set but they do display correctly for the individual record
13:40 kados: If you had used the bulkmarkimport.pl script which I supplied and told you to use, we should have had identical import processing between my system and Afognak
13:40 s/mark/marc/
13:41 kados I think I used it
13:41 I can't remember now
13:41 it's possible my mysql for that install isn't set up to have utf8 tables
13:41 thd kados: if I remember our discussion at the time well you had used it
13:41 kados my new installs do, but that one may be a holdover
13:43 thd kados: that is only an issue for indexing, not for the actual byte string that is stored
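A small sketch, not part of the original log, of checking which character set the MySQL tables actually declare, using DBI; the DSN, credentials, and table name are placeholders:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect(
        'DBI:mysql:database=koha;host=localhost',   # placeholder DSN
        'kohaadmin', 'secret',                      # placeholder credentials
        { RaiseError => 1 },
    );

    # SHOW TABLE STATUS reports the collation, which implies the character set
    # (e.g. utf8_general_ci versus latin1_swedish_ci).
    my $sth = $dbh->prepare(q{SHOW TABLE STATUS LIKE 'biblioitems'});
    $sth->execute;
    while (my $row = $sth->fetchrow_hashref) {
        print "$row->{Name}: $row->{Collation}\n";
    }
    $dbh->disconnect;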
13:44 kados: the odd thing is that tumer has reported the same problem
13:46 kados: this affects all the accented characters that I had noticed on Afognak, but I had presumed the UTF-8 display font problem was the factor
13:47 kados: if you have not seen this occurring elsewhere, then we have to ask tumer how he can repeat it reliably
13:48 kados I don't think it happens elsewhere
13:49 thd: http://wipoopac.liblime.com
13:49 thd: take wipo for example
13:50 thd: they have lots of accented chars that are just fine
13:50 thd kados: tumer did have a problem with the record editor not clearing $9 when adding or changing an authorised heading on a bibliographic record, but you would have to ask him for the details, which he had been discussing with hdl, to understand the problem more clearly
13:50 \
13:56 kados: yes, I do not see anything worse than the font problem, which, according to tumer, cannot be seen on MS Windows
13:57 kados: are the WIPO records being stored in Zebra?
13:57 kados no
13:58 but the tables are set to utf8
13:58 also, I believe when I did Afognak, I did not fully understand encoding
13:58 and how to manipulate it in perl
13:58 but by the time I did wipo's current installation, I had perfected it (mostly :-))
13:59 thd kados: Yet I almost completely replaced the import script that you supplied with good code which I later committed
14:00 kados right
14:01 thd kados: maybe it was careless kittens on the keyboard or path name priority
14:02 kados: I do not think that MySQL would do this no matter what the MySQL encoding was set to; it would only keep you from finding the records
14:03 kados: was MARC::File::XML doing double encoding at one time?
14:03 kados yes, because I had the wrong parser installed
14:04 thd kados: could that have been the problem for Afognak?
14:04 kados yes
14:04 thd and for tumer
14:06 kados: The problem was that MARC::File::XML had made some silly change and you had to update XML::SAX or what was it that needed updating?
14:06 kados well ...
14:07 it's complicated
14:07 best refer to my email on perl4lib
14:07 emails
14:07 http://www.mail-archive.com/pe[…]org/msg01006.html
14:07 all is explained there
14:13 thd kados: thanks, I will refer tumer to More Fun as the possible solution to his manifestation of the problem.
14:13 kados :-)
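The perl4lib thread linked above has the specifics; broadly, the behaviour depends on which SAX parser XML::SAX hands to MARC::File::XML. The following sketch, not part of the original log, shows how to inspect and override that choice for a single run; XML::LibXML::SAX is just one possible replacement and must be installed separately:

    use strict;
    use warnings;
    use XML::SAX::ParserFactory;

    # Show which parser XML::SAX::ParserFactory picks by default
    # (normally the last entry registered in XML::SAX's ParserDetails.ini).
    my $default = XML::SAX::ParserFactory->parser;
    print "Default SAX parser: ", ref($default), "\n";

    # Force a specific parser for this process only.
    $XML::SAX::ParserPackage = 'XML::LibXML::SAX';
    my $forced = XML::SAX::ParserFactory->parser;
    print "Forced SAX parser:  ", ref($forced), "\n";

A lasting fix adjusts XML::SAX's ParserDetails.ini so the desired parser is the last one registered, which is the sort of detail the email above walks through.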
16:46 hehe
16:46 chris: check out the proof of concept faceted search results on zoomopac when you get a chance
16:47 chris ill go look now
16:48 ohh thats pretty cool
17:28 slef chris: hehe... you don't want Italy to win?
17:31 chris my wife does, she lived there for 6 months
17:31 I dont really mind who wins, as long as they play well
20:24 slef <link rel="alternate" type="application/rdf+xml" title="slef-reflection in RSS 1.0"><xsl:attribute name="href"><xsl:value-of select="/rdf:RDF/rss:channel/@rdf:about" /></xsl:attribute></link>
20:24 oops
