IRC log for #koha, 2006-02-22


All times shown according to UTC.

Time Nick Message
11:00 paul to put all modifs between rel_2_2 and 225 into head
11:00 (or probably better : -j 224 -j 225
11:00 as there should not be anything interesting before 224
11:00 kados but I want to update HEAD using rel_2_2 ... is there a tag for HEAD?
11:01 paul nope
11:01 but if you are in head, -j 224 -j 225 will backport all modifs between 224 and 225 to your current branch (head)
11:02 iirc, because i'm not a cvs geek in fact.
11:02 kados hmmm
11:08 $ cvs update -j 224 -j rel_2_2 updatedatabase
11:08 cvs update: file updatedatabase exists, but has been added in revision rel_2_2
11:09 paul cvs checkout could be better.
11:11 kados only works for module names, not files
11:22 paul kados : don't forget to read & answer my mail "new features for borrowers"
11:23 kados paul: yep, I have added it to tonight's meeting agenda
11:23 paul (knowing that OP guys read the list, even if they never write)
11:23 kados I will respond when we talk about it
11:23 but my first thought is ...
11:23 paul ... suspens ...
11:23 kados there are probably many libraries that will need to adjust borrowers fields
11:24 so we need an extensible framework to handle this need
11:25 (same goes for 'statuses' and 'branches' too)
11:25 (and 'holdings' :-)
11:25 paul I think that what they suggest is almost enough : it gets rid of hardcoded categories, but doesn't add too much complexity + mandatory fields will be systempref driven.
11:25 going further could make Koha more complex, maybe too complex.
11:25 kados you mean too complex to set up?
11:25 paul yep
11:31 kados paul: have you wondered about zebra authentication? it seems currently, anyone can connect via ZOOM to my zebra and make changes to the db, or am I wrong? :-)
11:31 paul you're not wrong.
11:32 it's because you've added anonymous: rw in your zebra.cfg
11:32 (I added it too)
11:32 of course, it will have to be changed ;-)
11:32 (but if your firewall is correctly set, incoming connections should fail anyway)
11:36 kados I see
11:40 paul: I just forwarded to you an email with ideas for handling holdings in 3.0
11:40 paul: I sent it to chris over the weekend
11:45 will we continue to use biblio framework in 3.0?
11:45 paul probably.
11:45 kados for acquisitions mainly?
11:45 paul I bet yes in fact : the new MARC editor will be really too much for many libraries.
11:46 I think we will have 3 marc editors : the MARC=OFF, the 2.2 MARC=ON, and the brand new complete-but-for-MARC-fans
11:46 kados how about the biblio,biblioitems,items tables?
11:47 they are unused currently in head right?
11:47 paul will stay as well
11:47 nope.
11:47 they are still used (although slightly, I agree)
11:47 and i am SURE it would be a bad idea to remove them completely.
11:47 kados why?
11:47 paul at least for developers.
11:48 if we don't have easy-to-use SQL queries to check data, things will be much much more complex.
11:48 kados but with zebra you can use nice cql queries :-)
11:48 paul for example : if you have only issues tables, how can you check (export) the list of books in 1 SQL query ?
11:48 yes, but :
11:49 1- we don't have a tool like phpmyadmin for CQL
11:49 2- even if we had, we still need to have something to merge biblio and other data.
11:49 for example :
11:50 in borrowerdetails, you want to show all books reserved/issued by someone.
11:50 querying through CQL would be a waste of time.
11:50 kados well ... it depends
11:50 if you grab a list of recordids from issues table
11:51 then query through cql to zebra to get title/author, etc.
11:51 it should be quite fast
11:51 paul where you could do this in 1 SQL query only? I don't believe for even a second it would be faster!
11:52 kados it wouldn't be one SQL query
11:52 and I don't think we have speed problems for such a query
11:52 ie, one more millisecond won't hurt
11:52 paul it already is 1 SQL if I'm not mistaken ;-)
11:52 kados right ... so 1 SQL + 1 CQL
11:53 the advantage is that the template designer can decide what to display
11:53 right now we are limited to what is in biblio,biblioitems,items
11:53 and if we use old koha tables that eliminates the possibility to import other record formats
11:53 like dublin core or MODS
11:53 paul not really 1 CQL, 1 CQL for each item!
11:54 kados yep, it's true
11:54 I agree it's slower, but I don't think it matters
11:54 as the flexibility is more important
11:54 IMO
11:56 just imagine how interesting it could be to return a list of records
11:56 and have a '+' sign next to them
11:56 you click on '+' and it shows you the whole record
11:56 without the need for a details screen
11:57 the idea being, we do a query, return all relevant data to the template designer, and let him decide what to display
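A rough sketch in Perl of the hybrid lookup kados is describing: one SQL query against the issues table for the record IDs, then one Zebra lookup per record for the bibliographic details handed to the template. The connection strings, column names and the use-attribute are illustrative assumptions, not actual Koha code.

    use DBI;
    use ZOOM;

    # illustrative connection details and column names only
    my $dbh  = DBI->connect('dbi:mysql:database=koha', 'kohauser', 'password');
    my $conn = ZOOM::Connection->new('localhost:2100/kohadb');
    my $borrowernumber = 42;    # example borrower

    # one SQL query: which records does this borrower have on issue?
    my $sth = $dbh->prepare(
        'SELECT biblionumber FROM issues WHERE borrowernumber = ? AND returndate IS NULL');
    $sth->execute($borrowernumber);

    # then one Zebra lookup per record for title, author, whatever the template wants
    while ( my ($biblionumber) = $sth->fetchrow_array ) {
        # assumes the record number is indexed under bib-1 attribute 1007 (identifier)
        my $rs = $conn->search( ZOOM::Query::PQF->new('@attr 1=1007 ' . $biblionumber) );
        next unless $rs->size();
        my $marc = $rs->record(0)->raw();   # full record handed to the template layer
    }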
11:57 also, imagine having two databases in Koha: one for dublin core (for full-text electronic items) and one for MARC (regular biblios)
11:58 if we discard the old koha tables
11:58 it opens the possibility to index any record format that zebra can handle
11:59 paul we already can index anything. We just have to map fields to biblio.*
11:59 (except, I agree, we can handle only 2 level marc like records)
12:00 (2 level records like marc is better ;-) )
12:00 kados but really, MARC21 is up to 8 levels
12:01 paul no, I mean technically 2 levels: fields / subfields
12:01 kados I see
12:01 paul however, I agree we could/should get rid of additionalauthors, bibliosubject, bibliosubtitle.
12:02 that are useless now. we just need a few basic pieces of information on biblio/items.
12:02 but even to remove them, I would ask chris, because it will really break MARC=OFF
12:02 and thus i'm not sure it would be acceptable for kiwis
12:02 kados paul: chris and I discussed this
12:02 MARC=OFF means to use an XML format that is modeled off the old Koha tables
12:03 which illustrates my point again
12:03 if we eliminate old Koha tables, we will be able to index any record format
12:04 IMO the old koha structure is the reason no large libraries (like georgia PINES) choose Koha
12:04 there is too much fudging to try to map between MARC and Koha tables
12:05 plus, zebra adds functionality that we currently can't do easliy
12:05 easily I mean
12:05 like maintaining the same ID for a record even while updating it
12:08 it seems they are not open yet
12:08 MST not EST
12:17 paul: also, unless I'm mistaken, Z39.50 is a stateful protocol ... so I believe we could have just one $Zconn for the whole ILS ... which would dramatically reduce query times
12:18 as connections don't time out IIRC
12:45 paul back from phone.
12:46 kados : to check how utf8 works for you :
12:46 * grab a cvs head copy
12:46 * just go to marc_subfield_structure.pl (or itemtypes.pl, or any admin table)
12:47 * edit, adding an utf8 char (copy paste one from an electronic utf8 table if you don't have one)
12:47 * save it
12:47 it should appear as a non utf8 string immediately after saving.
12:47 (at least that's what happens to me)
12:48 just add Encode::decode_utf8() to your variable (before saving & while reading) and things should be OK
12:48 ?
12:48 (oops, was trying to paste a unicode char)
12:48 kados (phone) ... brb
12:49 back
12:51 paul: you use vi editor?
12:54 |hdl| I do
12:54 (sometimes because of ssh)
12:55 kados I'm confused
12:55 |hdl| kados
12:55 kados i edit marc_subfield_structure.pl?
12:55 where do I add the utf-8 char?
12:55 |hdl| Yes. Quite confusing the first few times...
12:55 OOOPS ?
12:56 Under Linux, there is a .vimrc
12:56 But I never encountered charset problems.
12:57 kados I still don't understand what I am supposed to do to test
12:57 |hdl| :set encoding=utf-8
12:57 in command.
12:57 kados ok ... that's it?
12:57 |hdl| Should.
12:58 But Maybe it is a dev module.
12:58 kados then save the file?
12:58 it didn't appear to do anything
12:58 |hdl| which version ?
12:58 vi -v
12:58 or -version
12:59 kados 6.3.84
13:01 |hdl| see :http://eyegene.ophthy.med.umich.edu/unicode/
13:02 or http://www.vim.org/htmldoc/mbyte.html
13:05 kados hmmm
13:05 I still don't understand what we are testing
13:06 are we testing whether filenames can be encoded in utf-8?
13:06 whether our scripts themselves?
13:06 or whether data in the database?
13:06 I thought it was just to test whether borrowers, branch names, etc. could be utf-8
13:08 |hdl|: can you clarify?
13:09 |hdl| Sorry.
13:09 I am working on acquisitions.
13:09 I think that script names are not the problem.
13:10 BUT since we want to be FULL UTF-8 compliant,
13:10 I think that we must ensure : data in database is UTF-8
13:10 AND perl generated HTML pages are UTF-8
13:11 Does that make sense ?
13:11 kados yep
13:11 so we need to modify mysql
13:11 tables
13:11 |hdl| The fact is that I saw Paul's problem with utf-8 but could not get much involved in the tests.
13:12 kados and we need to tell perl that all output should be sent as utf-8
13:12 |hdl| (I am moving and doing much DIY)
13:14 kados http://perldoc.perl.org/utf8.html
13:14 that means script is written in utf-8
13:16 http://ahinea.com/en/tech/perl[…]ode-struggle.html
13:19 it seems we need to warn perl if we are dealing with utf-8 data
13:19 |hdl| not so easy indeed :/
13:28 kados : rrp stands for unit price ?
13:29 Retailer R. price: what does the R stand for?
13:29 And ecost ?
13:30 kados not sure :/
13:30 http://www.google.com/search?h[…]3Arrp&btnG=Search
13:30 |hdl|: abbreviation for recommended retail price.
13:32 |hdl| thanks.
13:33 paul, kados : why is freight counted for each item ?
13:33 $total=($parcelitems[$i]->{'unitprice'} + $parcelitems[$i]->{'freight'}) * $parcelitems[$i]->{'quantityreceived'};   #weird, are the freight fees counted by book? (pierre)
13:33 $parcelitems[$i]->{'unitprice'}+=0;
13:33
13:38 kados chris (when you get up) : And P&P in reveice page ?
13:38 s/reveice/receive/
13:38 kados I'm not sure ... sometimes each book will add weight
13:39 if chris added that code then he could explain it I think
13:39 |hdl| P&P on the net is Plans and Programs (military field)
13:48 kados hmm ... it seems I'm forbidden to set topic
13:49 paul kados :
13:49 just go into Koha, admin >> itemtypes >> modify
13:49 and here, enter an utf8 character as itemtype description (after the existing description)
13:50 then save, and you should see if your description is correctly handled in utf8 or not.
13:50 for me, it's not.
13:50 kados well ... it's my understanding that if content-type is not utf-8
13:50 then when it is saved it will not be utf-8
13:50 paul I enter a greek letter, and after hitting "save", I see something like ÎA
13:50 mmm... did I miss something in the templates ??? (are you with PROG ?)
13:51 kados yes, PROG
13:52 but I believe I will need to tell the perl script that I intend to use utf-8, right?
13:52 paul mmm... your cvs is uptodate ?
13:52 no, I don't think so.
13:52 kados yes, cvs is up-to-date
13:52 it seems that 'add item type' is broken :/
13:53 paul (in fact : yes, you have to do this, or you'll get wrong results. but Tümer told me he changed nothing, and it works for him)
13:53 what happens for instance :
13:53 * you have an utf8 string in firefox.
13:54 * when you read the parameter, Perl sees it as utf8 correctly (if I'm not mistaken)
13:54 kados [Mon Feb 20 10:11:20 2006] [error] [client 70.104.108.241] can't opendir /home/koha/testing/koha/opac/htdocs/opac-tmpl/npl/value_builder: No such file or directory at /home/koha/testing/koha/intranet/cgi-bin/admin/itemtypes.pl line 109., referer: http://kohatest.liblime.com/cg[…]dmin/itemtypes.pl
13:54 [Mon Feb 20 10:11:20 2006] [error] [client 70.104.108.241] Premature end of script headers: itemtypes.pl, referer: http://kohatest.liblime.com/cg[…]dmin/itemtypes.pl
13:54 even though I'm using PROG, it calls for npl templates :(
13:54 paul * then, Perl sends the string to DBD::mysql, and that's where the problem is : DBD::mysql ignores the utf8 flag, or something like that. And it stores the value wrong.
13:55 (try something else : branch, framework, ...)
13:55 kados what is a french utf-8 word that I can test with?
13:55 paul which OS do you use ?
13:56 kados linux
13:56 OSX on my desktop
13:56 paul copy paste something from http://www.columbia.edu/kermit/utf8-t1.html
13:57 kados A
13:57 that?
13:58 U+0041
13:58 or that?
13:58 paul look here :
13:58 http://kohatest.liblime.com/cg[…]45&frameworkcode=
13:58 you should see a Î3/4 after title
13:58 where I entered a greek letter
13:58 ==> fortunately you have the same problem ;-)
13:59 no, copy/paste a letter that is between [] (1st column)
14:00 kados http://kohatest.liblime.com/cg[…]admin/branches.pl
14:01 paul is it what you entered ? (it's not "true utf8" if I'm not mistaken)
14:01 try with something that is NOT in the ascii 255 range.
14:01 something like a greek letter or an arab one.
14:01 I'll show you.
14:02 kados correct
14:02 paul search :
14:02 U+0629
14:02 kados it was three greek letters I copied pasted
14:02 paul I added it to your utf8 branch.
14:02 kados but ... I suspect that the problem is quite simple
14:02 paul now, it looks like :
14:02 ة‎
14:03 kados the branches.pl script and html on that page is not utf-8
14:03 so when we submit something, it is not encoded as utf-8
14:03 paul the html in firefox is utf8 at least
14:04 kados well ... it is in the <meta> tag
14:04 but that doesn't mean it is for sure
14:05 I will attempt to validate
14:05 ok .. you're right ... it's utf-8
14:06 paul I also tried to iconv -f iso -t utf8 branches.pl, it changes nothing
14:06 BUT :
14:06 add Encode::decode_utf8() to all variables read/saved to MySQL, and things will go correctly
14:07 iiuc :
14:07 Perl has an internal flag "utf8", to say "this variable contains utf8 data"
14:07 kados can't we set that in context.pm?
14:07 paul with many tools to magically find that a var is utf8. but the dbd::mysql driver doesn't set or use them.
14:08 thus, all variables coming from mysql are "utf8 = NO"
14:08 the decode_utf8 says "yes, it is, force"
14:08 your question : no, because it's not handled at the DBH level, it has to be done for EACH sql read
14:08 for example :
14:09 my ($x, $y) = $sth->fetchrow
14:09 you must Encode::decode_utf8 both $x AND $y
14:09 give it a try in branches.pl, you'll see it works !
14:09 kados for writing and reading or just for reading?
14:09 paul but what is VERY strange, is that for tumer it seems to work without this decode_utf8
14:10 (both i'm afraid, but I did not try with only 1 (read). It may work)
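A minimal sketch of the workaround paul describes: forcing Perl's utf8 flag on values read back from MySQL with Encode::decode_utf8, since DBD::mysql at the time ignored the flag. Table, column and connection details are illustrative only.

    use DBI;
    use Encode qw(decode_utf8);

    my $dbh = DBI->connect('dbi:mysql:database=koha', 'kohauser', 'password');

    my $sth = $dbh->prepare('SELECT branchcode, branchname FROM branches');
    $sth->execute;
    while ( my ($code, $name) = $sth->fetchrow_array ) {
        # DBD::mysql hands back plain octets; mark them as utf8 characters
        $name = decode_utf8($name);
        # paul reports the same treatment is needed on each value about to be written
        # ... use $code / $name ...
    }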
14:31 time to leave. 6:30 PM
14:31 see you in 2:30 hours, unless something goes wrong
14:41 thd kados: all accented letters are two bytes in UTF-8 so much of French is different in UTF-8.
14:42 paul just before leaving : someone suggests me to use
14:42 http://search.cpan.org/~oyama/[…]P-0.04/mysqlPP.pm
14:42 thd |hdl|: have you left?
14:42 |hdl| not yet
14:43 thd |hdl|: What was the problem with leader management, needing a code change?
14:44 |hdl| s/remind/recall/
14:45 Perhaps you could search in bugzilla ?
14:53 thd |hdl|: I only find old bugs relating the need to add leader support, not the need to correct leader management.
14:55 |hdl| did you dig into koha-devel ?
14:56 thd |hdl|: Should there be a message there about the problem?
14:57 |hdl| I saw some messages related to leader management. But is this THE problem you think about... I can't guess.
15:00 thd |hdl|: You committed the code to avoid leader management.
15:01 |hdl| Ah ! that one.
15:01 Yes.
15:02 thd |hdl|: Yet you commit so much it is not easy for you to remember every reason.
15:02 |hdl| It was something quite confusing when using MARCgetitem.
15:03 And it made me forget about this.
15:03 thd |hdl|: What was confusing?
15:04 |hdl| When getting marc record items, there would be a field with no subfield.
15:05 And every item marc field was supposed to have subfields.
15:06 thd |hdl|: Does Koha not treat control fields as if they had subfield '@'?
15:06 |hdl| It does.
15:07 But remember 000 field was added after code was written.
15:08 thd |hdl|: I understand, but 000 is also not the only control field.
15:09 |hdl| ok.
15:09 Time to leave.
15:09 See you.
15:09 thd hd|_away: What is the consequence of your change for a user attempting to use control fields?
15:11 hd|_away: I will ask you tomorrow, if you have no more seconds.
15:13 hdl_away thd: Normally MARCadditem should not add control fields, afaik, since it is an item and not a biblio.
15:14 But we can discuss it tomorrow.
15:14 thd good evening hd|_away
15:15 hd|_away: You remember the development meeting later today
15:31 hdl_away my activity was focused on acquisitions for 2.2.5
15:31 anyway.
15:39 thd hd|_away: I will answer your shipping question tomorrow
15:42 kados: The wiki website is down.  Would you email the meeting agenda to me?
15:43 kados !!
15:44 I'll just post it here:
15:44 PERL_ZOOM UPDATE
15:44 list of tasks remaining before perl-zoom is production ready
15:45 BORROWERS SUGGESTIONS FROM SAN
15:45 brb
16:14 Koha meeting coming up
16:27 thd kados: Was that the complete agenda?
16:28 kados: What are the SAN borrowers suggestions?
16:33 chris morning
16:34 thd good morning chris
16:37 kados morn chris
16:37 looks like our wiki's still down
16:37 is that being maintaine by Steve Tonnosen?
16:37 maintained even?
16:37 chris nope
16:37 roger buck, in sydney australia
16:37 kados ahh
16:45 thd kados: Are you writing a link for the SAN suggestions?
16:45 kados yep
16:45 should be there now
16:49 http://wiki.liblime.com/doku.php?id=meetingagenda
16:49 thd kados: I had noted this before but did not remember the detail to apply to your reference.
16:49 kados T-MINUS 15 MINUTES TO KOHA 3.0 MEETING
16:51 MEETING AGENDA:http://wiki.liblime.com/doku.php?id=meetingagenda
16:56 thd kados: Steven F.Baljkas was concerned about how the OPAC MARC view displays records where 650 is repeatable for example but all repeated field names appear only once with each of their respective contents appended.   Therefore, in the OPAC MARC view 650 $a 650 $a $x 650 $a $x appears as 650 $a $a $x $a $x which looks as if everything had been dumped into one field and not repeated fields.
16:59 kados T-MINUS 3 MINUTES TO KOHA 3.0 MEETING
16:59 MEETING AGENDA: http://wiki.liblime.com/doku.php?id=meetingagenda
16:59 thd kados: This particular aspect is a problem for MARCdetail.pl and opac-MARCdetail.tmpl .
17:00 kados thd: if I understand correctly, this will go away in 3.0
17:01 thd kados: yet, it does look very bad for the months before 3.0 is stable even with a complete and valid bibliographic framework.
17:02 chris its just a display problem thd?
17:03 kados still waiting for paul
17:03 thd chris: yes, that particular aspect of the MARC problem is merely about appearance.
17:03 thd kados: Yet, when appearance is wrong the presumption is that what is underlying is also wrong.
17:04 chris exactly thd
17:04 we should be able to fix that easily enough
17:04 kados welcome paul
17:04 paul hello world
17:04 kados ok we have quorum
17:05 MEETING AGENDA:http://wiki.liblime.com/doku.php?id=meetingagenda
17:05 russ hi everyone
17:05 kados quick roll call
17:05 I'm here
17:05 paul hdl phoned me 2 hours ago, he will not be here tonight
17:05 kados ok
17:05 thd thd: still chasing moving bugs that I already squashed previously.
17:05 chris im here for an hour (maybe more) i have a roofing guy coming to look at our roof
17:05 thd :)
17:05 kados chris: ok
17:06 paul good morning chris.
17:06 kados so ... first item on the agenda:
17:06 chris but tradesmen turn up whenever they feel like it :)
17:06 kados Perl-zoom Update
17:06 chris: :-)
17:06 on the wiki I have listed 'list of tasks remaining' and 'plugin idea for 2.2.6'
17:06 perhaps those should be reversed
17:07 the plugin idea was this:
17:07 as has been started already, make perl-zoom a drop-in replacement for marc* tables in 2.2.5
17:08 so replace Biblio.pm and SearchMarc.pm, run your zebraupdate script, etc. and you're able to run zebra with 2.2.6
17:08 paul wow !!! great goal, but really foolish i'm afraid
17:08 because Biblio.pm is NOT ready at all
17:08 thd kados: does that not require too much debugging for 2.2.6?
17:09 paul not at any price would I put any of my customers on zebra for instance !
17:09 kados me either yet
17:09 paul nice to read you rach
17:09 kados hi rach
17:09 MEETING AGENDA:http://wiki.liblime.com/doku.php?id=meetingagenda
17:09 chris yep thats the plan
17:09 richard hi
17:09 paul followed by richard.
17:10 chris paul: i successfully acquisitioned a book using Biblio.pm
17:10 paul yes, that partially works. But many many things untested yet.
17:10 chris thats right
17:10 paul believe me, it's far from production-ready
17:11 deletion => nothing done
17:11 chris yep
17:11 paul for example.
17:11 chris i agree we cant do it tomorrow :)
17:11 paul item add/modify on an existing biblio => untested yet
17:11 ...
17:11 chris but i dont think its impossible
17:12 paul my goal, as RM for 2.2.x branch, is to be as stable as possible.
17:12 kados of course ... and I'm not suggesting that 2.2.6 include zebra
17:12 paul I'm strongly against a public release that include zebra in 2.2 branch.
17:12 kados I'm simply suggesting that it be an option
17:12 paul ah, ok. I was misunderstanding
17:13 kados ie, _if_ you want to use perl-zoom it's possible to use with 2.2.6 before 3.0 is ready in some months
17:13 chris so you install 2.2.6 for example
17:13 paul if you want to have a 2.2.6, + some explanations to add zebra features, then, it may be a good idea
17:13 kados yep, that's the plan
17:13 chris *nod* that was what we were thinking
17:13 kados so the question is:
17:13 paul that would be useful to hunt bugs in zebra ;-)
17:13 chris maybe even a zebra-installer.pl that you can download
17:13 kados what do we need to do with perl-zoom before it can be used in 2.2.6?
17:14 ie, what's left/untested?
17:14 chris deletion
17:14 kados right, got that
17:14 item adds/deletions
17:14 chris i havent tried deleting a record from zebra
17:14 modify works
17:14 add works
17:14 search works
17:14 kados there's a new routine in Biblio.pm
17:14 that I committed
17:14 paul collection.abs is far from complete for UNIMARC.
17:15 kados z3950_extended_services
17:15 thd kados: Do you mean primarily for reading bibliographic record data but not altering the data there?
17:15 chris yes and its far from complete for marc21 too
17:15 paul (same thing for marc21 i think)
17:15 kados it should be able to handle any extended services action
17:15 chris thd might be able to help with the .abs files
17:15 kados actually, I think the .abs is ready
17:15 for marc21
17:15 chris the collection.abs?
17:15 paul kados : I bet it isn't.
17:15 kados yep
17:16 might not be, but I think it should be good for most cases
17:16 paul as you need to explain, for example, that title contains title+ subtitle+ uniform titles...
17:16 chris ahh i think he has done that paul
17:16 kados yep
17:16 it's quite complete :-)
17:17 there may be some gaps tho
17:17 thd kados: Did you use the MODS mappings?
17:17 kados thd: no, the marc21.abs that Sebastian put together with the LOC consultant was the guide
17:17 paul it's complete, but only for basic fields.
17:17 kados thd++
17:18 paul I mean title/author/subject
17:18 no ISBN for example
17:18 chris paul, i think if we allow ppl to get it going .. they might be able to help finish it, im thinking of people like steven balkjas etc
17:18 kados yep
17:18 paul I plan to do the same for UNIMARC, but I'll have to explain collection.abs syntax...
17:19 chris right
17:19 kados so this brings up another question
17:19 when do we create the $Zconn object?
17:19 Z3950 is stateful
17:19 if I'm understanding correctly
17:19 chris i think what we need
17:19 paul we should/could do something like C4::Context->dbh
17:19 chris is to look at C4::Context
17:19 heh, great minds think alike paul :-)
17:19 kados :-)
17:20 but if I"m understanding
17:20 correctly ... the $Zconn is even more stateful than dbh
17:20 chris that checks if a connection exists, open one if it doesnt
17:20 thd chris: Allowing the customer to finish it has not produced a complete and accurate MARC bibliographic framework that I am writing now for MARC 21 yet.
17:20 kados I dont' think there is even a timeout
17:20 chris yep
17:20 but there are lots of reasons you could lose a connection
17:20 kados true ...
17:21 chris its always safest to check, and only create if needed
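A rough sketch of what a cached C4::Context->Zconn could look like, modelled on how C4::Context->dbh reuses its handle; the package variable, host, port and database name are assumptions for illustration, not the eventual Koha implementation.

    package C4::Context;
    use strict;
    use warnings;
    use ZOOM;

    my $zconn;    # cached ZOOM connection, analogous to the cached $dbh

    # sketch only: a real version would also notice dropped connections
    # (chris's point) and reconnect rather than blindly reuse the handle
    sub Zconn {
        return $zconn if defined $zconn;
        $zconn = ZOOM::Connection->new('localhost:2100/kohadb');   # illustrative target
        return $zconn;
    }

    1;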
17:21 kados ok ... so our list so far:
17:21 Context->Zconn
17:21 deletions
17:21 item adds/modifies/deletes
17:21 anything else?
17:21 paul (note that this MAY work. but by chance, it's untested yet)
17:21 .abs improvement
17:21 kados ok
17:22 how about searching and retrieving results ... where are we at with that?
17:22 chris http://opac.koha3.katipo.co.nz[…]-test.pl?cql=joke
17:22 paul SearchMarc is poorly tested from my point of view.
17:22 chris lets you test your .abs
17:22 paul I haven't checked that results were accurate.
17:22 chris (if you know what data there is)
17:22 kados right
17:22 paul I just checked that I got results ;-)
17:22 chris paul: im working on that now
17:22 so far so good, (SearchMarc) that is
17:22 kados I'm assuming that we also need to handle safe updates with Zebra right?
17:23 thd kados: what data is in your test?
17:23 chris safe updates?
17:23 kados zebra can do updates safely or unsafely :-)
17:23 with safe updates it doesn't commit the changes until you're done and it didn't crash
17:23 it's something we need to setup in zebra.cfg and also incorporate into our update routine
17:24 chris right
17:24 kados any time we call extended services that is
17:24 paul what does zebra mean by "you're done" ?
17:24 connection closed ?
17:24 kados I'm not 100% certain ...
17:24 paul a "commit" ?
17:24 chris i think a commit
17:24 kados yes, commit
17:24 chris and you use shadow dbs
17:25 paul thus, when do we commit ?
17:25 chris so you make changes to a shadow db, when we are done, we commit
17:25 paul iiuc, shadow is not to cache updates, but to be sure a search made while updating is safe
17:25 kados yep
17:26 paul iiuc : searches are done on DB A, updates are done on DB B, when finished => search on DB B, update DB A
17:26 kados sub z3950_extended_services should be able to handle a 'commit' if handed that operation
17:27 paul ok, but when do we decide to handle a "commit" ?
17:27 can't we ask zebra to auto-commit every 5 seconds, or something like that ?
17:27 kados I think it depends on the operation
17:27 if you're bulk-importing records, maybe once every 1000 records or when you're done?
17:28 if you're just updating a single record ... immediately
17:28 chris i think what we want is
17:28 eval { do the update };
17:28 paul OK, sounds good to me
17:28 chris if ($@){
17:28 some error
17:28 } else {
17:28 commit
17:28 }
17:28 kados hmmm
17:29 chris maybe
17:29 kados I'm not sure ... you might be right
17:29 but I thought the commit action was a separate action altogether
17:29 it itself is a 'service type'
17:29 chris yep
17:30 kados well ... we can try some things out and ask ID if we need to
17:30 chris i think you are right
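A hedged sketch of the update-then-commit flow being discussed, using ZOOM extended-services packages against a Zebra server running with shadow registers; the option names follow the ZOOM-Perl documentation, while the error handling and record content are only illustrative.

    use ZOOM;

    my $conn    = ZOOM::Connection->new('localhost:2100/kohadb');   # illustrative target
    my $marcxml = '...';    # the record being added or replaced (placeholder)

    eval {
        # stage the change in the shadow register
        my $p = $conn->package();
        $p->option( action => 'specialUpdate' );
        $p->option( record => $marcxml );
        $p->send('update');
        $p->destroy();
    };
    if ($@) {
        warn "update failed, nothing committed: $@";
    }
    else {
        # make the staged change visible to searches
        my $c = $conn->package();
        $c->send('commit');
        $c->destroy();
    }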
17:30 kados anything else before perl-zoom will be ready?
17:30 paul not that I think atm
17:30 chris marc-detail.pl
17:30 kados right
17:31 chris itd be nice to be able to pull that right from zebra
17:31 kados yep
17:31 paul (and isbd-detail.pl)
17:31 chris yes
17:31 kados yep
17:31 paul should require at least 10 minutes' development ;-)
17:31 kados heh
17:31 chris :-)
17:31 thd isbd-detail.pl requires the most changes
17:31 paul why thd ?
17:31 kados so who's got what? guess we should have been keeping track all along
17:32 paul I take care of biblio.pm, as usual.
17:32 chris thanks paul
17:32 thd isbd-detail user settings are backwards
17:32 kados thanks paul
17:32 chris i will continue on with search
17:32 paul that includes MARCgetbiblio, that requires at least 10 minutes' hacking ;-)
17:32 kados paul: you might want to use my new routine ... or if not tell me why :-)
17:32 paul I also will test modif/deletion...
17:32 chris http://opac.koha3.katipo.co.nz[…]-detail.pl?bib=10
17:33 paul I will for sure, joshua. They seem quite good.
17:33 chris and ill get the marc view and isbd view going
17:33 paul: we have no prog templates for the opac ;(
17:33 thats why you might have seen a commit to the npl ones on the opac side (for my search-test)
17:33 paul ah, you're right. I can't take care of this. I have enough to do with Biblio.pm + Ouest-Provence+ many other things.
17:34 kados paul: can't take care of what?
17:34 paul (prog opac)
17:34 kados paul: prog templates?
17:34 I'll ask owen
17:34 chris maybe kados can bribe owen with subway sandwiches :)
17:34 kados hehe
17:34 ok, thd and I will work on collection.abs for unimarc and usmarc
17:35 russ kados if owen doesnt have time let me know
17:35 kados russ: will do
17:35 chris fabulous
17:35 kados ok ... shall we move along then
17:35 UTF-8 problems
17:35 chris i was thinking about this
17:35 paul no mail here. no news from tumer for 3 hours ?
17:35 chris if we get no joy from the maintainer (or the maintainers boss)
17:35 kados none yet :(
17:36 chris i wonder, should we patch dbd::mysql ourselves
17:36 kados right
17:36 paul that would really be a problem for a public release.
17:36 kados include it in C4
17:36 paul right, that's what I wanted to add
17:36 kados might not be ... if it's in C4
17:37 since it will be statically linked
17:37 chris and submit the patch .. and if we still get no joy
17:37 paul if we do this, we should NOT request libraries to patch the package themselves
17:37 kados (not sure if that's the right term)
17:37 chris then dbd::mysql::utf8
17:37 kados right
17:37 paul right
17:37 kados ok great ... so we all agree
17:37 shall we move on?
17:38 chris k
17:38 kados Koha Tables in 3.0
17:38 paul the next question will be harder to reach a common agreement on ;)
17:38 chris heh
17:38 kados paul: :-)
17:38 thd: good point, this is related to 3.0 holdings suggestions
17:38 which is the next item on the agenda
17:39 so before we discuss it, any questions about my holdings suggestions?
17:39 http://wiki.liblime.com/doku.p[…]oldingssuggestion
17:40 the basic idea is, we need a more flexible framework to support multiple tiers of holdings like in standard MARC Holdings
17:40 where the hierarchy can be 8 levels deep at least
17:40 chris or as little as 2
17:40 kados right
17:40 paul I think the DB scheme is correct.
17:41 what I don't see for instance is how to handle this in Koha & MARC
17:41 kados so if we create such a framework, we can eliminate all bibliographic data from sql
17:41 thd it should have arbitrary depth
17:41 kados all we will need is holdings data
17:41 in mysql
17:41 and bib data in zebra
17:41 so you have a single record ID
17:41 that is a record-level ID
17:42 then, in holdings forest table, you can have arbitrarily deep holdings represented
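One way to picture such a "holdings forest" in SQL is a self-referencing table where every node points at its parent, so a record can carry anything from a flat list of copies to an eight-level serial hierarchy. The table and column names below are a purely hypothetical sketch, not a proposed schema.

    -- each node points at its parent; a NULL parent marks the top of a tree,
    -- so one bibliographic record (kept in Zebra) can own a whole forest of nodes
    CREATE TABLE holdings_node (
        node_id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        biblionumber INT UNSIGNED NOT NULL,   -- record-level ID of the biblio in Zebra
        parent_id    INT UNSIGNED NULL,       -- NULL for a root node
        level_label  VARCHAR(80),             -- e.g. branch, collection, volume, issue, copy
        data         TEXT                     -- level-specific holdings information
    );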
17:42 chris i think what paul is saying is that conceptually its correct, what the hard part is going to be is .. how do we build things like Biblio.pm to handle this
17:42 kados agreed
17:43 paul chris : ++
17:43 kados I think the MARC frameworks might be a good place to start modeling it
17:43 ie, build a framework for an 8-level serial record
17:43 chris so we would have frameworks, that describe how the structure works?
17:44 thd kados: why would you want a model based on the evil record format?
17:44 kados in the specific case, yes
17:44 or maybe not
17:44 thd kados: why?
17:44 kados I haven't fully thought this through
17:44 but I do know one thing
17:44 thd kados: You have the smarties format
17:44 kados many many libraries will not adopt koha until it can handle this kind of holdings data
17:45 at the very least, we need an underlying framework that can handle it
17:45 chris i think what we need to do
17:45 kados even if the default framework is still simple
17:45 chris is think some more, and write some pseudo code
17:45 kados yep
17:45 ok ... moving on then
17:45 SAN's borrower's suggestions
17:45 chris a high level prototype of how it might work
17:45 thd kados: Oh so you mean a framework for the MARC support part/
17:45 ?
17:46 kados thd: yea, that's what I meant
17:46 chris borrower suggestions?
17:46 kados http://lists.gnu.org/archive/h[…]-02/msg00053.html
17:46 new features for borrowers as paul described
17:46 paul kados : SAN does not exist anymore. It's now only "OUEST PROVENCE"
17:46 chris ahh
17:46 kados paul: ok ... sorry ... QP then :-)
17:47 paul OP, not QP
17:47 kados ok :-)
17:47 paul OUEST = WEST
17:47 kados right
17:47 chris you are planning this for 3.0 only paul?
17:47 paul yep
17:47 kados ok ... I think the proposed changes are definitely an improvement
17:47 but I worry that they are still not quite extensible enough
17:47 thd paul: What is the reason for an 'O'?
17:48 paul O ???
17:48 thd paul: OP
17:48 kados for one thing, I think any hard-coded category_types should be removed
17:48 thd: Ouest Provence
17:48 paul OP = Ouest Provence, the name of the library consortium
17:48 kados thd: = OP
17:48 chris its french thd :)
17:48 paul that is in the west of Provence county
17:49 (not really a county. a "Région" in France. larger than a county)
17:49 chris i think its a good idea
17:49 paul in france, we have : communes - département - région - pays
17:50 thd kados: go on I will find out why 'O' and not 'Q' later.
17:50 kados thd: ok :-)
17:51 so chris and I discussed creating a 'business logic framework'
17:51 paul kados : I agree that it's a shame to have something hardcoded.
17:51 but by what could we replace it ?
17:51 kados where you could link a given action with a result
17:51 based on certain criteria
17:51 thd kados: Hard coded category types that are not set up in advance are certainly doubly problematic in Koha 2
17:52 kados it could be replaced by a hierarchy
17:52 paul as usual kados : in theory, it's better. but do we have someone to code this !
17:52 kados heh
17:52 maybe not in time for 3.0
17:53 thd I like the extensible reusable shiny forest that can be applied to many problems
17:53 chris flexibility, ease of use, speed
17:53 pick 2
17:53 :)
17:53 kados heh
17:54 chris its always a juggling act .. and i think steady progress might be the way to go
17:54 if we can do a forest for holdings first
17:54 thd chris: ease of use is just a user interface issue
17:54 kados right ... so we'll leave branch hierarchies and patron hierarchies out of 3.0 then ... have to draw the line somewhere
17:54 paul thd : what can look easy for a user can be a real pain for a developer !
17:54 kados in that case, I'd say OP's ideas are fine
17:55 chris and that sentiment is how you end up with things as horribly ugly as a lot of the ILS's out there :)
17:55 kados heh
17:55 chris yep, i think one hard thing at a time :-)
17:55 kados right
17:55 ok ... anything else to discuss?
17:55 chris i like OP's ideas i think its a big improvement
17:55 paul I think the real main improvement is :
17:55 chris russ had something
17:55 paul borrowers table cleaning !
17:55 thd paul: yes, ease of use for user in a flexible design is very much work for the programmer.
17:55 kados paul: right!
17:56 paul because for instance, this table is really an "inventaire à la prévert" (a Prévert-style inventory, i.e. a jumbled catch-all list)
17:56 russ oh i just wanted to let paul know that we have got the go ahead for our serials module
17:56 so we are ploughing into that this week
17:56 kados russ: congrats! wohoo!
17:56 paul great.
17:56 thd paul: but if it is flexible enough you have the opportunity to reuse the code in more places.
17:56 paul you should also get in touch with hdl and explain to us what you'll do.
17:56 russ yep - shame he couldnt be here today
17:57 paul iirc, he already has committed some code to have serial items created on the fly
17:57 it's in 2.2 cvs
17:57 chris excellent
17:57 russ great
17:58 paul http://cvs.savannah.nongnu.org[…]=koha&view=markup
17:58 yes, it's committed
17:58 russ perhaps we can make a time a little later to show you guys
17:58 cool
17:58 paul &serialsitemize
17:59 underdocumented i'm afraid.
17:59 you can bug him
17:59 russ sweet - i'll drop you a line in a day or two once we have some stuff to "show and tell"
17:59 paul (he's my employee, so I let you kick him if you want ;-) )
17:59 chris hehe
17:59 kados heh
18:00 russ i have pretty big feet - so i don't think that should be encouraged :-)
18:00 kados anything else to discuss for our meeting?
18:00 paul a last note
18:00 kados sure
18:00 paul I plan to work on late issues this week
18:00 kados late issues?
18:00 for serials?
18:01 right ... someone needs to bug roger
18:01 paul defining 3 levels of late issues warnings to borrowers.
18:01 thd paul: where is the changeable subfield order for the record editor for 2.2.6?
18:01 paul ???
18:02 chris ahh overdues paul?
18:02 paul right
18:02 kados paul: keep in mind that some of us use fines as well
18:02 chris cool that sounds good
18:02 paul for each branch/borrowercategory, you can define 3 levels of "letters", depending on how late the books are.
18:02 kados very cool
18:02 paul plus a flag to debar the borrower
18:02 kados nice
18:03 chris right, if we can get it to put fines too
18:03 that would rule
18:03 kados yea, that kind of flexibility is something some of my clients have asked about
18:03 chris at hlt we have a field called preferred contact
18:03 paul me too. And it's not funded, but i have some time, so I'll do it
18:03 kados paul++
18:03 chris which is used when sending out overdue notices
18:04 if its email, koha will email the person the notice
18:04 kados right ... npl has a similar function
18:04 chris would that be able to be added to yours paul?
18:04 paul it's a field in borrowers table isn't it ?
18:04 chris yes
18:04 paul it is already in the table isn't it ?
18:04 kados yep
18:04 chris yep
18:04 just need the code to use it
18:04 paul I'm not sure I already use it, but i'll take care of it.
18:05 chris cool
18:05 kados thx paul
18:05 any other news?
18:05 paul the idea being to use : preferred email 1st, then any other mail available, am I right ?
18:05 chris thats right
18:05 kados I'll get the minutes out later today, read you all next week ... /me has a cold and must take a short nap
18:05 paul ok, have a good day.
18:06 kados ciao all
18:06 russ bye
18:06 paul i'll try to have a good night ;-)
18:06 chris cya kados
18:06 night paul
18:06 thd paul: are you still here?
18:06 paul microsoft ad on TV ;-)
18:07 thd paul: where do I modify code for the following change?
18:07 paul: the OPAC MARC view displays records where 650 is repeatable for example but all repeated field names appear only once with each of their respective contents appended.   Therefore, in the OPAC MARC view 650 $a 650 $a $x 650 $a $x appears as 650 $a $a $x $a $x which looks as if everything had been dumped into one field and not repeated fields.
18:10 paul: Can the change be done in MARCdetail.pl only without touching opac-MARCdetail.tmpl  ?
18:12 paul: maybe you are now paul_away.
18:12 paul (no, speaking with russ)
18:15 thd : i'm afraid you can't
18:15 thd paul: Is it really not correctable?
18:15 paul it wasn't the behaviour some versions ago.
18:16 fields were repeated. some libraries thought it was too much info
18:16 and I decided to clean the screen.
18:16 but I agree it's not a perfect solution
18:16 we just have a larger space between fields
18:19 thd paul: I understand that it is not the current behaviour, yet how could I change it for libraries that think it is showing a defect?
18:19 paul: I am trying to change this now.
18:20 paul MARCdetail.pl
18:20 line 174-177
18:20 comment those lines
18:21 mmm... not sure, that may be for subfields
18:22 thd paul Are those lines for rel_2_2 or HEAD?
18:22 paul 2.2
18:22 if ($#subfields_data==0) {
18:22 $subfields_data[0]->{marc_lib}='';
18:22 $subfields_data[0]->{marc_subfield}='';
18:22 }
18:22 line 181
18:22 $tag_data{tag}="";
18:23 comment it as well as the if { } else {}
18:23 (just let :
18:23 if (C4::Context->preference('hide_marc')) {
18:23 $tag_data{tag}=$tagslib->{$fields[$x_i]->tag()}->{lib};
18:23 } else {
18:23 $tag_data{tag}=$fields[$x_i]->tag().' -'. $tagslib->{$fields[$x_i]->tag()}->{lib};
18:23 }
18:23 that should work
18:23 (I agree it's highly underdocumented)
18:28 thd paul: that code could certainly use a little comment :)
18:34 paul ok, bye bye everybody, goin to bed
18:35 thd good evening paul_away
18:35 thank you paul_away
19:47 kados thd-away: did you and paul fix what was being complained about?
20:02 chris or if thd is about
20:23 mason have i hung the cvs server?
20:23 chris no
20:24 mason oops, wrong channel
20:43 kados chris: I'm back
20:43 chris: was feeling a bit feverish earlier
20:43 but doing better now
20:43 chris http://opac.koha3.katipo.co.nz[…]Cdetail.pl?bib=29
20:43 fetching from zebra now
20:43 kados nice!
20:44 looks like you've got item details too
20:44 chris i wonder if while i was doing it i fixed thd's problem
20:44 yeah that all gets stuck in zebra by the import
20:45 its using my get_record()
20:45 kados not sure what the problem was exactly
20:45 chris you give it a biblionumber
20:45 and it gives you a marc record
20:45 kados if it was listing a 650 a x a x as 650 a x 650 a x then the answer's no
20:45 nice
20:46 chris ahh it was doing the opposite, lemme show ya
20:46 kados cool
20:46 chris http://opac.koha3.katipo.co.nz[…]Cdetail.pl?bib=29
20:46 thats what it was doing
20:46 kados ahh ... right
20:47 nice job then!
20:47 chris but im not sure if thats what he wanted, he'll read the logs i guess
20:47 kados seems fairly easy to develop with zoom :-)
20:47 yep
20:47 chris yeah
20:47 with marc anyway
20:47 kados chris: so we forgot to discuss the need to updatedatabase
20:47 chris: for the perl-zoom plugin
20:48 I spoke to paul about it yesterday
20:48 apparently there is quite a bit of stuff in rel_2_2 updatedatabase that's been removed in head
20:48 I'm not familiar enough with cvs diff and patch to know how best to merge the right changes
20:49 what I _think_ I want to do
20:49 chris right
20:50 kados is merge in just the perl-zoom stuff
20:50 not sure if i need updatedatabase to update mysql to utf-8
20:50 or innodb
20:50 chris right i think leave those for 3.0
20:50 kados so zebra.cfg will have to not use utf-8 then
20:50 for that plugin
20:51 chris right
20:55 kados http://cvs.savannah.nongnu.org[…]nly_with_tag=MAIN
20:55 from paul's notes, it looks like the marcxml column
20:55 is related to utf-8 changes
20:55 chris ahh
20:56 kados looks like the two important commits
20:56 are 1.120 and 1.125
20:58 chris yeah those are the 2 big ones
21:00 ok, http://opac.koha3.katipo.co.nz[…]Ddetail.pl?bib=29
21:00 how do you set up isbd stuff?
21:01 kados just nab it from koha.liblime.com
21:01 in preferences
21:01 catalog tab
21:01 http://koha.liblime.com/cgi-bi[…].pl?tab=Catalogue
21:02 though I agree that's thd's extreme example :-)
21:02 chris :)
21:03 kados I asked on koha-zebra about commit and $Zconn
21:05 chris http://opac.koha3.katipo.co.nz[…]Ddetail.pl?bib=29
21:05 using get_record too now
21:05 kados nice!
21:12 chris right theres a bunch of commits
21:12 time for some food
21:12 kados cool
21:33 chris: when you get back from lunch, if you have time, could you walk me through the process of syncing updatedatabase so I can start doing QA on the perl-zoom plugin?
21:48 chris hmm
21:48 i think what we need to do is figure out what changes we need
21:49 i think the main one is adding the marcxml column
21:50 and then make a copy of the 2.2 updatedatabase
21:50 that does the making marcxml as well
21:54 kados right
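The smallest version of that change would just be an ALTER TABLE added to the copied 2.2 updatedatabase script; the target table and column type below are assumptions for illustration, since the chat does not say where the column lives.

    -- hypothetical: add a column to hold the MARCXML serialisation of each record,
    -- then walk the existing records and fill it in
    ALTER TABLE biblioitems ADD COLUMN marcxml LONGTEXT;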
22:40 chris sleep well
23:34 thd chris: If you are going to be revising the ISBD Perl script for Zebra, you should understand how ISBD for Koha 2 is backwards.
23:35 chris: are you there?
23:35 chris yep
23:35 wasnt planning on revising it yet
23:35 all i have done is make it fetch the data from zebra using zoom
23:35 rather than from mysql
23:36 thd chris: For future reference then
23:37 chris: In Koha 2, the ISBD configuration file tells Koha what to return from the record.
23:37 chris right
23:38 thd chris: In Koha 3 the record should provide the information about what to return while the configuration merely provides placement and joining punctuation.
23:39 chris k
23:39 so each record has isbd information stored with it?
23:42 thd chris: The problem with Koha 2 leads to 260 $a $b $a $b $c being represented as 260 $a $a $b $b $c.
23:42 chris: MARC records do not have ISBD information.
23:43 chris so if you have marc records, you cant display them in ISBD format?
23:43 in koha 3?
23:43 thd chris: MARC records have subfield order information.
23:43 chris ahh so you only use ISBD if you dont use MARC?
23:44 thd chris: actually, that order information always was in Koha 2 but usually ignored.
23:44 chris k
23:44 oh while you are here
23:45 thd chris: The problem is that the Koha 2 ISBD configuration actually specified the order of subfields instead of reading the order from the record.
23:45 chris http://opac.koha3.katipo.co.nz[…]RCdetail.pl?bib=4   has that fixed the problem you were having with 650?
23:46 thd: ahhh i get it now
23:46 that makes sense
23:47 thd chris:  ISBD display is to put some parts of the record in the correct place with correct joining punctuation.
23:47 chris right
23:48 thd chris: ISBD has a different order for the placement of the general parts of the record and what parts are important.
23:48 chris ok
23:50 thd chris: ISBD is much more of a standard for the traditional representation that used to appear on printed cards in the catalogue.
23:50 chris yeah just looking at
23:50 http://opac.koha3.katipo.co.nz[…]BDdetail.pl?bib=4
23:50 it does look very much like the old cards
23:50 do libraries usually define their own?
23:50 or use a standard layout?
23:52 thd chris: kados has found that as an easy user modifiable configuration his libraries liked to use it themselves to create a custom display.
23:52 chris yeah i can see that that would be desirable
23:53 thd chris: There should be something else for that purpose such as preferences for the detail view controlled by a framework.
23:54 chris: No need to have an ISBD display that does not follow the ISBD rules.
23:54 chris true
23:54 itd be nice to be able to create lots of views
23:55 like you can create frameworks
23:55 you could have a kids view
23:55 thd chris: That was merely the most accessible for librarians who might not have access to do more on the web services that kados provides.
23:55 chris that they could choose .. or a dvd view which shows the info relevant to do dvds
23:55 etc
23:57 thd chris: The users like to be able to change things themselves without asking a programmer/template designer to change a template.
23:57 chris yep
23:58 thd chris: your MARC example fixed the MARC problem.  However, there may be a problem with committing that code.
00:00 chris: That had been the original form of the MARC view.  Paul changed it after his libraries complained that repeating the field name was verbose.
00:00 chris ahh right
00:00 thd chris: kados has the opposite problem from his customers.
00:01 chris: Or prospective customers.
00:02 chris: A preference to choose between economical and verbose format is needed.
00:03 chris it was just commenting out some lines so i wont commit the fix in 2.2
00:04 thd chris: paul does acknowledge that even the economical form should have extra space between repeated subfields so as not to confuse all the subfields as collectively belonging to the same individual field..
00:04 chris hmmm i should increase that timeout limit
00:05 thd chris: There should probably also be an alternate very compact non-descriptive view with only the codes and no semantic labels.
00:05 chris there 5 minutes instead now
00:06 yep, that should be much easier with zebra
00:08 thd chris: Only the codes is the traditional MARC view for librarians who think that it is not a real library system unless it has a view that is as difficult to interpret as possible.
00:08 chris :-)
00:08 lets give them the codes in roman numerals :-)
00:08 thd chris: I set my timeout to 24 hours.
00:09 Romans were all fine librarians. :)
00:15 chris: something else about ISBD.  Even I cheated the ISBD standard in my complete MARC 21 Koha 2 ISBD configuration in a small way as a concession to readability with newlines for subjects that should probably be inline according to the standard.
00:15 chris ah ok
00:18 thd chris: It is a small amendment if someone wants it to be more difficult to read on a computer screen where there is less control over presentation than on a printed card for which the format was devised.
00:19 chris someone probably will :)
00:26 thd chris: ISBD is not a record exchange format where adaptation to the display medium will break its function.  A better ISBD configuration syntax would not need cheating for readability..
00:26 chris right
00:35 thd chris: If you do not commit the opac-MARCdetail.pl changes to rel_2_2 without taking the time to set up a user preference, will you commit it to HEAD or email it to koha at agogme.com ?
00:37 chris its committed to head
00:38 the commenting out at the bottom is what i changed
00:38 thd thanks chris
03:49 chris: I do not find any change committed in CVS HEAD to koha/catalogue/MARCdetail.pl since paul's "moving catalogue views to catalogue directory" over 4 weeks ago.
03:51 chris: Did you commit a change for MARCdetail.pl to HEAD in another location or did I misunderstand?
03:56 chris no i only committed it to opac-MARCdetail.pl
03:57 the same changes could be replicated to MARCdetail.pl easily enough though
03:57 theres just an if commented out
03:58 thd chris: I see you added a new file name?
03:58 chris i did?
03:59 opac/opac-MARCdetail.pl is what i changed
03:59 thd chris: opac-MARCdetail.pl did not exist before only MARCdetail.pl unless I am mistaken.
04:00 chris you are mistaken :)
04:00 its for the opac
04:00 not the intranet
04:00 its up to revision 1.11 in head
04:00 thd chris: I had the right name before as I used it.
04:01 chris in that file
04:01 lines 152, 153, 154 and 156
04:02 are the lines i commented out, to make the fix
04:02 the other changes are to do with zebra, so you dont want them for 2.2
04:04 thd chris: I had not thought clearly about whether the OPAC and the intranet would need different Perl files for comparable functions.
04:05 chris well all the code should actually be in a module
04:06 the scripts should only be very minimal
04:06 but we have more than enough to do, without refactoring too :)
04:07 thd chris: A module would be better with more comments in the code but no need to break everything at once.
04:09 chris yep
04:10 thd chris: I wish the squashed bugs would stay dead though.  I see behaviour changes reintroducing them in different places.
04:11 chris bummer
05:15 osmoze hello
05:48 thd hello osmoze
06:18 chris evening paul and osmoze
06:19 paul hello chris.
06:19 (just answering russ email about kohaCon)
06:19 chris cool
06:19 http://opac.koha3.katipo.co.nz[…]RCdetail.pl?bib=4
06:19 and
06:19 http://opac.koha3.katipo.co.nz[…]BDdetail.pl?bib=4
06:20 fetching the data from zebra now
06:20 (running under apache2 and mod_perl2 as well)
06:21 took me a while to understand the pqf file
06:21 but i think i understand it now
06:22 i added
06:23 index.dc.identifier                     = 1=1007
06:23 paul I saw the commit
06:24 chris because i saw in bib1.att
06:24 paul but still unclear what it means :-(
06:24 dc = dublin core ?
06:24 chris att 1007            Identifier-standard
06:24 yes
06:24 paul why are we dealing with dublin core here ?
06:24 chris and in our collection.abs
06:24 we have
06:25 melm 090$c      identifier-standard,identifier-standard:p
06:25 so now i can go
06:25 "identifier=$biblionumber"
06:25 paul what is strange to me is that bib1 and dublin core are not the same thing.
06:25 right chris.
06:25 chris and it looks in 090c
06:25 kinda tricky
06:26 took me quite a long time to figure it out
06:26 paul indexdata doc is very large, but not always easy to understand.
06:26 chris yes
06:26 i agree
06:26 paul something else :
06:26 about utf8 : someone on a french perl mailing list told me that everything works well with http://search.cpan.org/~oyama/[…]SQL-0.08/MySQL.pm.
06:26 http://search.cpan.org/~oyama/[…]P-0.04/mysqlPP.pm
06:26 (both required)
06:26 but I get a :
06:27 DBI connect('database=head;host=127.0.0.1;port=3306','root',...) failed: Couldn't connect to 127.0.0.1:3306/tcp: IO::Socket::INET: connect: Connection refused at /usr/lib/perl5/site_perl/5.8.7/DBD/mysqlPP.pm line 109, referer: http://127.0.0.1/index_perso
06:27 chris hmmm
06:27 paul any idea why such a message occurs ?
06:27 [client]
06:27 port=3306
06:27 socket=/var/lib/mysql/mysql.sock
06:27 is my my.cnf
06:27 chris you can connect to it from the command line
06:28 paul why ?
06:28 (how)
06:28 chris sorry i mean can you?
06:28 eg
06:28 paul no, me sorry ;-)
06:28 chris mysql -uroot -ppassword -h127.0.0.1 head
06:29 paul of course I can.
06:29 chris so its definitely listening on that port then
06:29 hhmmm
06:29 paul right
06:29 i cant in fact
06:29 mysql -uroot -ppassword  head
06:29 is OK
06:29 chris telnet localhost 3306
06:29 paul but -h127.0.0.1 isn't
06:30 chris it might be only listening on unix sockets
06:30 paul telnet => KO
06:30 seems it listens only on unix sockets.
06:30 chris right
06:30 check my.cnf
06:30 there might be skip-networking
06:30 if so, you can comment it out
06:31 paul no, there is none
06:31 chris hmmm
06:31 paul but show variables tells me
06:31 chris ohh bind-address ?
06:31 paul skip networking = ON
06:32 chris ahh, so somehow we have to switch that off
06:37 hi hdl
06:38 paul chris : i don't see how to switch skip-networking off
06:42 chris hmm in the my.cnf
06:42 is there a bind-address bit?
06:42 you could try
06:42 bind-address            = 127.0.0.1
06:42 and see if that works
06:44 paul [paul@bureau ~]$ mysql -uroot -h127.0.0.1 -p head
06:44 Enter password:
06:44 ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
06:44 :-(
06:44 chris darn
06:45 what version of mysql paul?
06:45 paul 4.1.12
06:47 chris and the script that starts it ... /etc/init.d/mysql (or whatever) doesnt have --skip-networking as an option?
06:49 paul it doesn't seem so
06:50 chris its a puzzle
06:51 osmoze Paul, if you comment out skip-networking and restart, what happens?
06:52 (in my.cnf)
06:52 paul skip_networking is ON by default osmoze
06:52 mysql don't restart if I add skip_networking=OFF
06:53 osmoze I don't have skip_networking, just a bind-address; what if you just comment that out?
06:54 in my conf :
06:54 # Instead of skip-networking the default is now to listen only on
06:54 # localhost which is more compatible and is not less secure.
06:54 bind-address            = 127.0.0.1
06:55 chris yeah thats what is in my conf also
06:55 osmoze so i have no skip_networking
06:55 chris thats 4.0.24
06:55 paul in [mysqld]
06:55 section
06:55 ?
06:56 osmoze yes chris, i've the same
06:56 chris yes
06:57 paul here is my complete my.cnf :
06:57 [client]
06:57 port=3306
06:57 socket=/var/lib/mysql/mysql.sock
06:57 [mysqld]
06:57 datadir=/var/lib/mysql
06:57 socket=/var/lib/mysql/mysql.sock
06:57 port=3306
06:57 set-variable = key_buffer=64M
06:57 set-variable = max_allowed_packet=1M
06:57 set-variable = table_cache=256
06:57 set-variable = sort_buffer=8M
06:57 set-variable = record_buffer=2M
06:57 set-variable = myisam_sort_buffer_size=64M
06:57 set-variable = thread_cache=8
06:57 set-variable = sort_buffer_size=16M
06:57 #skip_networking=OFF
06:57 bind-address = 127.0.0.1
06:57 #set-variable = default-character-set=utf8
06:57 #default-character-set=utf8
06:57 # Default to using old password format for compatibility with old and
06:57 # shorter password hash.
06:57 # Reference: http://dev.mysql.com/doc/mysql[…]word_hashing.html
06:57 old_passwords=1
06:57 [mysql.server]
06:57 user=mysql
06:57 basedir=/var/lib
06:57 [mysqld_safe]
06:57 err-log=/var/log/mysqld/mysqld.log
06:57 pid-file=/var/run/mysqld/mysqld.pid
06:57 =====
06:57 done
06:57 and still skip-networking=ON
06:58 (and no connection through -h127.0.0.1
06:58 )
06:58 chris hmmm
06:58 thats /etc/mysql/my.cnf right?
06:58 paul it's /etc/my.cnf for me
06:59 chris is there anything in /var/lib/mysql ?
06:59 another my.cnf there?
06:59 paul a /var/lib/mysql/my.cnf ? no
07:00 chris what does ps axf | grep "mysql"
07:00 say
07:00 i get
07:00 paul confused
07:00  410 pts/6    S      0:00 /bin/sh /usr/bin/mysqld_safe --defaults-file=/etc/my.cnf --skip-networking --pid-file=/var/run/mysqld/mysqld.pid
07:00 ...
07:00 chris /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
07:00 right
07:00 paul who adds the --skip-networking
07:00 ?
07:00 chris good question
07:01 what distro do you use?
07:01 paul mandriva 2006
07:01 chris hmm, dont know much about that sorry
07:01 in debian we have /etc/default/
07:02 which sometimes has configuration in it
07:02 paul in /etc/rc.d/init.d/mysqld, I see :
07:03        /usr/bin/mysqld_safe --defaults-file=/etc/my.cnf \
07:03            ${MYSQLD_OPTIONS:-""} \
07:03            --pid-file="${mypidfile}" >/dev/null 2>&1 &
07:03 MYSQLD_OPTIONS should be the culprit
07:03 chris looks like it
07:03 paul few lines before, there is :
07:03 get_mysql_option /etc/my.cnf datadir "/var/lib/mysql"
07:03 mmm... no, that's not it.
07:04 how to find who fills MYSQLD_OPTIONS ?
07:04 chris cd /etc
07:04 grep "MYSQLD_OPTIONS" -r *
07:04 maybe
07:05 that might work too :)
07:07 paul sysconfig/mysqld:# (oe) Remove --skip-networking to enable network access from
07:07 sysconfig/mysqld:MYSQLD_OPTIONS="--skip-networking"
07:07 chris ah ha
07:07 good detective work
07:07 theres a trap for beginners
07:07 paul yep.
07:08 osmoze so it's good if you comment it ?
07:08 paul ok, skip networking is now OFF
07:08 checking connection
07:09 Trying 127.0.0.1...
07:09 Connected to bureau.paulpoulain.com (127.0.0.1).
07:09 Escape character is '^]'.
07:09 4
07:09 4.1.12Gr,"bL4`,dXk3XD*CSXB"
07:09 chris that looks more like it
07:11 paul gotcha ! many thanks
07:11 (the last problem was one with 127.0.0.1 and localhost, easy and already encountered many times ;-) )
07:12 chris :-)
07:12 paul hoping this mysqlPP will solve my utf8 problems ;-)
07:12 otherwise, we are wasting our time
07:13 chris but now, i must go to bed
07:13 good luck
07:13 paul good night.
07:13 i'll let you know any success or failure
07:13 osmoze good night chris
