IRC log for #koha, 2006-08-04


All times shown according to UTC.

Time Nick Message
12:58 thd kados: are you there?
12:58 kados: I posted some more questions to the wiki page
12:59 kados thd: I'll take a look
13:19 tumer[A] kados:??
13:19 kados tumer[A]: I'm here
13:20 tumer[A] do not wait for me to finish installing this debian today
13:20 it will take some time
13:20 staff is tired
13:20 kados ok
13:20 did you get the OS installed at least?
13:20 tumer[A] i want you to see this
13:21 kados if so, you can send me ssh login and passwd and I can ssh in and finish the install
13:21 tumer[A] well not completely
13:21 tomorrow i will send you whatever you want
13:21 kados ok
13:22 tumer[A] with yaz client go in library.neu.edu.tr:9999
13:22 find john
13:22 format xml
13:22 show 1
13:22 that is a complex record
13:23 then do elem biblios
13:23 show 1
13:23 that is a bibliographic record
13:23 then do elem holdings
13:23 show 1
13:23 that is a holdings record
13:23 kados tumer[A]: library.neu.edu.tr:9999/biblios?
13:24 tumer[A] no its default
13:24 kados [239] Record syntax not supported -- v2 addinfo ''
13:24 ahh
13:24 sorry
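A condensed sketch of the session tumer walks through above (assuming the host is reachable and the default database; "elem" sets the element set name, which selects which view of the record Zebra returns):

    $ yaz-client library.neu.edu.tr:9999
    Z> find john
    Z> format xml
    Z> show 1          (full record: biblio plus holdings)
    Z> elem biblios
    Z> show 1          (the bibliographic <record> only)
    Z> elem holdings
    Z> show 1          (the <holdings> wrapper only)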
13:25 why is <record> the root for the biblio record?
13:25 but <holdings> is the root for holdings?
13:25 shouldn't you have <biblio><record> ?
13:25 tumer[A] holdings is multiple holdings
13:26 kados very cool though
13:26 tumer[A] this way it's fully MARC compliant, each <record>
13:26 kados right, I see
13:26 very nice
13:26 so when you edit a record
13:26 do you have to save the whole <koharecord> every time?
13:27 or can you modify <holdings><record> individually?
13:27 tumer[A] all individually editable
13:27 kados sweet
13:27 tumer rocks!
13:27 as usual :-)
13:28 so where is the prob?
13:28 tumer[A] but ZEBRA is taking so much effort
13:28 no prob
13:28 kados ahh
13:28 tumer[A] just zebra crashes
13:28 kados just very slow?
13:28 ahh
13:28 well ...
13:28 i think we could:
13:28 get zebra running as yours is on linux
13:28 write a script to simulate cataloging processes
13:28 write another script to simulate searches
13:29 send that stuff to ID
13:29 and they _must_ fix it
13:29 in 15 days no less :-)
13:29 tumer[A] this way of indexing will be a little bit slower, so says ID
13:29 kados right
13:29 because it's xpath?
13:29 have you been exchanging email with ID?
13:29 because I haven't gotten any of it
13:29 tumer[A] but i can index multiple xmls as one bunch from zebraidx as well
13:30 i have designed xslt sheets and xsd sheets which i am going to send soon
13:30 kados you can index a bunch of xml?!?!
13:31 isn't that a feature not implemented by ID yet?
13:31 in zoom?
13:31 tumer[A] no not in zoom from zebraidx
13:31 kados ahh
13:31 still ... I thought it only worked for iso2709
13:31 with zebraidx
13:31 tumer[A] similar to our iso2709
13:32 kados did ID implement this for you?
13:32 tumer[A] its almost the same speed as iso
13:32 kados ID did this? or did you?
13:32 tumer[A] no its their new Alvis filter
13:32 kados nice!
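A rough sketch of what driving the Alvis filter from zebraidx can look like (the config file name and path here are assumptions for illustration, not tumer's actual setup; the filter hands each incoming XML record to an XSLT-driven configuration):

    # zebra.cfg (sketch)
    recordType: alvis.db/alvis_conf.xml

    $ zebraidx -c zebra.cfg update records/    (records/ holds the XML batches)
    $ zebraidx -c zebra.cfg commit             (make the new index live)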
13:32 so have you been emailing support@id?
13:32 cuz I haven't gotten any ccs :(
13:32 tumer[A] no just reading the lists
13:32 kados ahh
13:33 very cool !
13:33 tumer[A] i did not get any support from them
13:33 kados k
13:33 tumer[A]: i also have a bit of news
13:33 tumer[A] apart from mike saying i cannot merge lists
13:33 kados http://wiki.koha.org/doku.php?[…]raprogrammerguide
13:34 check the Field weighting section
13:34 and the Multiple Databases section too
13:34 field weighting is really powerful!
13:34 tumer[A] kados: i played with those but could not get much use out of them
13:34 especially the multiple database did not help
13:34 kados it's useful because you can tell zebra:
13:35 do a search on exact title and title for 'it'
13:35 tumer[A] i did not use weighting
13:35 kados weight by exact title
13:35 so the ones with exact title 'it' come first
13:35 I'm going to write a new CCL parser
13:35 tumer[A] very cool
13:35 kados that transforms every CCL query into a PQF query that deals with weight
13:36 so the librarians can specify where to weight the query
13:36 tumer[A] i thought this section was "do not use in production yet!"
13:36 kados no, it's in 1.3, so it's stable
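In PQF terms, the weighted search kados sketches could come out something like this single query (a sketch against Zebra's Bib-1 extensions: 1=4 is the title use attribute, 2=102 requests relevance ranking, 6=3 asks for a complete-field match, and type 9 is Zebra's rank-weight attribute; the weight value 42 is an arbitrary example):

    @or
      @attr 1=4 @attr 2=102 @attr 9=42 @attr 6=3 it
      @attr 1=4 @attr 2=102 it

Records whose title is exactly 'it' rank first, while ordinary title matches still appear below them.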
13:49 tumer[A] kados: one reason for me to split the record like this is because i am going to prevent the union catalog from reaching the holdings section
13:50 kados tumer[A]: right, makes really good sense
13:50 tumer[A] that part contains lots of nonpublic notes
13:50 kados tumer[A]: so they are saved separately?
13:50 tumer[A]: there are three indexes?
13:50 tumer[A]: I'm not up on the Alvis filter
13:50 tumer[A] no, one index, one kohacollection record
13:51 different xslt sheets
13:51 the default sheet will only show the biblios
13:52 without saying elem biblios
13:52 kados ahh ...
13:52 dewey ahh ... is that how 'snapshots' are done?
13:52 kados wow, that's really nice
13:52 tumer[A] other sheets will be out of bounds except from within koha
13:52 kados we could do _anything_!
13:52 tumer[A] yep
13:52 kados we could have a MODS stylesheet
13:52 or dublin core!
13:52 holy shit!
13:52 tumer[A] already have it
13:52 kados holy shit!
13:52 tumer[A] DC, MODS, MADS
13:53 kados hehe
13:53 ok ...
13:53 tumer[A] i am dropping because of fatigue
13:53 kados tumer[A]: get some sleep man :-)
13:54 tumer gets major props
13:54 tumer[A] its taking lots of time to design the indexing sheets
13:54 kados owen: so ... currently, we have:
13:54 tumer[A] i do not know a word of xslt
13:54 kados IP address, port, and database
13:54 so you can connect to a database and run queries
13:55 owen: tumer now has a filter added to the mix
13:55 owen: so instead of just pulling out raw marc
13:55 owen: we can pull out any xmlish data we want
13:55 owen: just by specifying an xslt filter
13:55 owen when connecting to what database?
13:55 kados any of them
13:55 so instead of:
13:55 pulling out a MARC record
13:56 creating a MARC::Record object for it
13:56 passing it in
13:56 looping through, getting out the good data for display
13:56 passing to the template as a loop
13:56 writing html in the template
13:56 we can:
13:56 * query zebra for the data using a stylesheet
13:56 * display it directly in HTML on the page
13:57 all we need to do is have an xslt stylesheet defined for turning
13:57 MARC into HTML
13:57 this is groundbreaking stuff
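A minimal Perl sketch of that retrieval path using the ZOOM API (the element set name "biblios" mirrors tumer's setup above; the host and query are placeholders):

    use ZOOM;

    # connect to Zebra and ask for XML shaped by the server-side stylesheet
    my $conn = ZOOM::Connection->new( 'library.neu.edu.tr:9999' );
    $conn->option( preferredRecordSyntax => 'xml' );
    $conn->option( elementSetName        => 'biblios' );

    # title search; the rendered record is display-ready markup, no field loop needed
    my $rs = $conn->search_pqf( '@attr 1=4 "tom sawyer"' );
    print $rs->record(0)->render() if $rs->size();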
13:57 owen Would you still need to pass that final-stage HTML to a template somehow?
13:57 kados yea
13:57 but not as a loop
13:57 just as a variable
13:57 owen Just as a chunk
13:57 kados yep
13:58 owen Swank.
13:58 kados so the labels would be 100% customizable
13:58 especially if the xslt was in turn a syspref :-)
13:58 owen I mean crap. Now I gotta learn XSLT.
13:58 kados yea, you gotta :-)
13:59 tumer[A] and owen please do, i am trying xslt on a trial and error basis
13:59 kados so we can have all kinds of filters
13:59 one for OPAC display (maybe with certain fields hidden)
13:59 one for Intranet Display
13:59 one for the MARC editor
14:00 hehe
14:00 one for RSS
14:00 one for DC, one for MODS
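For the MARC-into-HTML case, a minimal sketch of such a stylesheet (assuming MARCXML input in the standard loc.gov "slim" namespace; the class names and the choice of 245$a are illustrative only):

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:marc="http://www.loc.gov/MARC21/slim">
      <xsl:output method="html"/>
      <!-- render each bib record as a block showing its title proper (245$a) -->
      <xsl:template match="marc:record">
        <div class="record">
          <span class="title">
            <xsl:value-of select="marc:datafield[@tag='245']/marc:subfield[@code='a']"/>
          </span>
        </div>
      </xsl:template>
    </xsl:stylesheet>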
14:03 thd kados: so this is what we had wanted
14:03 kados thd: yep :-)
14:04 thd kados: the only drawback is the size of XML and its impact on performance when exchanging XML files across the network
14:05 kados: I think that the major performance issue for the record editor is all the XML fields taking up so many bytes when transferred
14:07 kados: can we compress the XML before sending it without redesigning basic protocols like Pines did?
14:08 kados thd: yes
14:08 thd: JSON
14:08 thd: piece of cake
14:10 thd kados: what does JSON have to do with compression?
14:10 kados thd: json is essentially compressed XML
14:11 thd: it's what Evergreen uses
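For a sense of the size difference, here is the same MARC field in MARCXML and in a compact JSON rendering (a hypothetical encoding for illustration, not necessarily Evergreen's actual wire format):

    <datafield tag="245" ind1="1" ind2="0">
      <subfield code="a">Adventures of Tom Sawyer</subfield>
    </datafield>

    ["245", "1", "0", {"a": "Adventures of Tom Sawyer"}]

Most of the savings come from dropping the repeated element names, which is why the JSON transport is so much lighter than the equivalent XML.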
14:11 tumer[A] kados:here is the record schema http://library.neu.edu.tr/kohanamespace/
14:14 kados k
14:14 tumer[A] before i continue with designing the rest we need to agree on the record design
14:15 kados ok ...
14:15 I have one question before we continue
14:15 tumer[A] it's just an extension of MARC21XML as described at loc.gov
14:15 kados right
14:15 a superset of it, right?
14:15 tumer[A] right
14:15 kados so my question is
14:16 kados can we at the same time, do 'duplicate detection'?
14:16 tumer[A]: do you understand what I mean?
14:16 tumer[A] duplicate of what?
14:16 kados in other words ... what about having:
14:16 <koharecord>
14:16   <bibliorecord>
14:16     <record>
14:16     </record>
14:17     <holdingsrecord>
14:17     </holdingsrecord>
14:17   </bibliorecord>
14:17   <bibliorecord>
14:17   etc.
14:17 so we not only group holdings within a biblio ... we also group biblios within a koharecord
14:17 that way, when I search on 'tom sawyer'
14:18 the 'koharecord' will pull up that title, with multiple MARC records beneath it
14:18 does that make sense?
14:18 tumer[A] it does but very complicated
14:18 kados yes I agree
14:18 we may be able to use the FRBR algorithm
14:18 if it's too complicated, we can consider it for 4.0
14:19 tumer[A] FRBR ?
14:19 kados http://www.oclc.org/research/projects/frbr/
14:19 Functional Requirements for Bibliographic Records
14:19 tumer[A] ahh yes i saw that
14:19 kados tumer[A]: I'm just throwing this idea out there
14:20 tumer[A]: just brainstorming ... so feel free to call me crazy :-)
14:20 tumer[A] well its beyond me for the moment
14:20 kados k
14:20 no prob
14:20 tumer[A] currently we have:
14:20 <kohacollection>
14:20   <koharecord>
14:21     <recordMARC21>
14:21     </recordMARC21>
14:21     <holdings>
14:21       <recordMARC21holdings>
14:22       </recordMARC21holdings>
14:22       <recordMARC21holdings>
14:22       </recordMARC21holdings>
14:22     </holdings>
14:22   </koharecord>
14:22 </kohacollection>
14:22 kados right
14:22 tumer[A] kohacollection can take many koharecords
14:23 and index them all at once with zebraidx
14:23 kados nice
14:24 tumer[A] but to join all Tom Sawyers together is perl's job
14:24 kados yes
14:24 but my idea was not to join all tom sawyers together in a single recordMARC21
14:24 ie, the records _have_ to be separate
14:25 tumer[A] yes but every time we add a new tom sawyer we have to find the previous one and join them
14:26 anyway, you brew on that, i go to sleep
14:28 night all
14:30 thd kados: I wonder if putting everything that might be put into a single XML record makes the XSLT too inefficient
14:31 kados: I was disconnected for the best discussion on #koha yet
14:31 kados here's my idea:
14:31 <kohacollection>
14:31   <biblio id="1">
14:31     <biblioitem id="1">
14:31       <recordMARC21/>
14:31       <item>
14:31         <recordMARC21holdings id="1"/>
14:31         <recordMARC21holdings id="2"/>
14:32       </item>
14:32     </biblioitem>
14:32     <biblioitem id="2">
14:32       <recordMARC21/>
14:32       <item>
14:32         <recordMARC21holdings id="3"/>
14:32         <recordMARC21holdings id="4"/>
14:32       </item>
14:32     </biblioitem>
14:32   </biblio>
14:32 </kohacollection>
14:34 thd kados: i think you could add duplicates of authority records and solve the authority indexing problem in Zebra
14:34 kados could be
14:35 here's a better scheme:
14:35 <kohacollection>
14:35    <biblio id="1">
14:35        <biblioitem id="1">
14:35            <recordMARC21/>
14:35            <recordMARC21holdings id="1"/>
14:35            <recordMARC21holdings id="2"/>
14:35        </biblioitem>
14:35        <biblioitem id="2">
14:35            <recordMARC21/>
14:35            <recordMARC21holdings id="3"/>
14:35            <recordMARC21holdings id="4"/>
14:35        </biblioitem>
14:35    </biblio>
14:35 </kohacollection>
14:35 thd kados: can everyone afford the CPU to parse very large XML records under a heavy load?
14:35 kados thd: parsing isn't too bad
14:35 thd: it's transport that kills you
14:36 thd kados: yes while I was disconnected you missed my posts about transport
14:36 kados so we do simple client detection ... if they have js, we pass the xml directly to the browser as JSON and let the browser parse it
14:36 otherwise, we parse it server side and just pass html
14:37 to the browser
14:37 thd kados: if we could digress back to the dull issue of transport for a moment
14:37 as you already have
15:47 kados: one moment while I check the log for my posts about transport while disconnected
15:48 kados ok
15:51 thd kados: so there is a method for transforming XML into JSON and then transforming it back to XML again losslessly?
15:52 kados: maybe there is no difference in what is transmitted for use by the editor because that is always the same size data for building an HTML page in JavaScript whether it starts as ISO2709 or starts as MARC-XML
15:54 kados: what exactly is the advantage of passing data to the browser in JSON?
15:54 kados: are you still here?
15:55 kados thd: if the client has javascript, the advantage is that the xslt processing can be done client-side
15:55 thd: and the transport of HTML + JSON is much less than HTML + MARCHTML
15:56 thd kados: is the difference large?
15:56 kados well ...
15:56 yes
15:56 thd well you did say much less
15:57 kados probably on average HTML + JSON will be about 20% the size of HTML + MARCHTML
15:57 thd kados: so does that raise the CPU requirements or RAM requirements of the client to process the XSLT efficiently
15:58 kados thd: not by much
15:58 thd: demo.gapines.org
15:58 thd: does that work for you?
16:00 thd kados: my suspicion is that they might be down for the client over processing 500% larger HTML + MARCHTML
16:01 kados thd: does demo.gapines.org work well for you?
16:01 thd: the whole interface is client-side using JSON
16:02 thd kados: does that not require a download first?
16:02 kados: do I not have to install some XUL?
16:02 kados nope, not for the opac
16:03 just have javascript turned on in your browser
16:03 thd kados: OK so yes the OPAC works but it is rather slow for features that certainly have no need of being client side
16:04 kados thd: well whether to do it client-side sometimes could certainly be a syspref
16:04 thd kados: I expect it is much faster if it still works with JavaScript off
16:04 kados thd: i am 100% committed to having a 'no javascript' option
16:05 thd: maybe faster on your machine, but definitely not on mine
16:06 thd kados: I wondered why the correct tab disappeared every time I used the back function in my browser on zoomopac.liblime.com
16:08 kados: I am accustomed to finding the form I had used, with the old values still in place, ready for changing into a new query; although a session state could store the current query or link to a 'change your query' option
16:09 kados: so there is no problem about recovering the original MARC-XML from JSON?
16:09 kados no
16:09 JSON is identical to XML in terms of storage capabilities
16:09 thd kados: we will have a one to one element to element correspondence?
16:10 kados no
16:10 JSON is just a compressed version of XML
16:10 it's identical in capabilities
16:11 thd kados: lossless compression? you answered no to both questions just now.  Did you mean to answer no the second time?
16:11 kados lossless compression
16:11 there is no problem about recovering the original MARC-XML from JSON
16:11 we will not have a one to one element to element correspondence
16:12 JSON is lossless compression of XML
16:12 thd kados: how can both of those statements be true?
16:12 kados ?
16:13 thd: do some reading on JSON, i don't have time to explain it all right now :-)
16:13 thd kados: do you have time to discuss something more exciting?
16:14 kados: by which I mean FRBR etc. in XML?
16:15 kados sure
16:16 but I don't think that's so simple unfortunately
16:16 thd kados: ok, having exploded once already today
16:16 kados because there is no one-to-one correspondence between MARC and any of the functional levels in FRBR
16:16 thd kados: not simple, therefore, fun
16:16 kados which is why FRBR sucks
16:17 thd kados: you mean which is why MARC sucks
16:17 kados so to get FRBR working, you have to break MARC
16:17 yea ... that's what I mean :-)
16:17 but a FRBR library system couldn't use MARC other than on import
16:17 you can't go from MARC to FRBR then back to MARC
16:17 it's a one way trip
16:18 thd kados: you just need a good enough meta-model and a large amount of batched CPU time to find the FRBR relations in the data set
16:19 kados thd: but where do you store those relations?
16:20 not in the MARC data
16:20 you have to have a FRBR data model
16:20 that is separate from your records
16:20 and used only for searching
16:20 thd kados: and just when you thought that was enough there is FRAR and FRSR
16:20 kados what are those? :-)
16:20 authorities and serials?
16:20 shit
16:21 librarians making standards--
16:21 thd name and subject authority relations respectively although I am not perfectly confident about the acronyms
16:23 kados: so right MARC 21 does not do authority control on all the needed elements often enough in the case of uniform titles or ever in many other cases
16:24 kados: but you do not need a place in MARC to store the relations because you can store them in your XML meta-format
16:26 kados: then you can change them easily by script in a batch process as you perfect your relation matching algorithm for overcoming what cataloguers never recorded explicitly
16:27 or never recorded consistently in controlled fields
16:27 kados right
16:27 well if you come up with a xml format for storing FRBR
16:28 and a script to create FRBR
16:28 from MARC
16:28 I'll write the OPAC for that :-)
16:29 thd kados: I think if we have a reasonable place for storing the relations in a meta-record even if we have no good enough script yet we can experiment by degrees
16:29 kados well ... we're gonna need some data
16:29 but I suppose we could start with like 5-6 records manually created
16:30 thd kados: we would actually have a working system that could provide the basis for the experiment rather than building one later and reinventing the meta-record model later
16:30 kados: how is the foundation coming along?
16:30 kados no news yet
16:33 thd kados: so the data for individual bibliographic records can stay in MARCXML while the relations are stored in larger meta-records
16:33 kados hmmm
16:33 but you still have to search on the FRBR dataset
16:34 you can't just store the relations
16:34 thd kados: because of the current limitation on Zebra indexing we have to store all immediately linked records together in one meta-record
16:35 kados: meta records need to be work level records because of current Zebra indexing limitations
16:37 kados: and they need to contain all lower levels and linked authority records within them
16:39 kados: you have to search a database full of bibliographic records for the matching records at various levels and then test them for true matches first
16:40 kados: the search may not be sufficient in itself
16:41 kados: you have to compare likely candidates for satisfying some FRBR-level test
16:43 kados: so initially your meta-records would be mostly empty place holders for where you would eventually store matching records
16:45 kados: yet if you have the system supporting the structure for the XML meta-record you do not have to write a completely new system when you have perfected your record matching script
16:45 kados right
16:45 but we need to:
16:45 1. create an XML version of FRBR that we can index with Zebra
16:46 thd kados: if you have to write a new system to do something useful with  the experiment you will be much further from the goal
16:46 kados 2. create some example FRBR records in that structure
16:46 3. define some indexes for the structure
16:46 4. write a OPAC that can search those indexes
16:46 i can do 3, 4
16:46 but not 1, 2
16:47 so if you do 1, 2, I'll do 3, 4 :-)
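One possible shape for step 1, following the work-level meta-record idea discussed above (every element name here is invented for illustration; the MARC records themselves would sit unchanged inside the wrapper):

    <workrecord id="w1">
      <uniformtitle>Adventures of Tom Sawyer</uniformtitle>
      <manifestation id="m1">
        <recordMARC21/>            <!-- the bibliographic record, as-is -->
        <recordMARC21holdings/>    <!-- its item-level holdings -->
      </manifestation>
      <authorities>
        <recordMARC21authority/>   <!-- duplicated authority records, as thd suggests -->
      </authorities>
    </workrecord>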
16:47 but now I need to get dinner
16:47 I'll be back later
16:47 thd kados: you left out FRAR and FRSR
16:49 kados :-)
16:49 be back later
16:49 thd when is later?
16:49 kados thd: an hour maybe?
16:49 but I won't have much time to chat ... I've got a ton of work to do
16:50 thd we both have a ton of work
18:50 kados thd: are you back?
18:51 thd: got an authorities question
18:51 thd: http://opac.smfpl.org/cgi-bin/[…]thorities-home.pl
18:51 thd: do a Personal Name search on 'Twain, Mark'
19:17 ai morning
19:18 can anyone give me an idea how to configure ldap with koha please
20:03 thd kados: I had to buy another fan
20:08 kados: am I looking at uniform title authorities?
20:13 kados thd: I'm here
20:17 thd 100 10 $a Twain, Mark, $d 1835-1910. $t Celebrated jumping frog of Calaveras County. $l French & English
20:18 kados: that is from the name/title index
20:22 kados: so I think the issue is that what you have are postcoordinated authority headings which are not in NACO or SACO
20:23 kados: the more I think about the super meta-record the more I like it
20:23 kados: i think it can solve multi-MARC koha as well
20:25 kados: and multiple names, subject, etc. authorities databases from different languages
20:26 kados yea, it might
20:26 thd kados: it would not solve the issues intrinsically but provide facility for a system that could solve them in due course
20:26 kados yep
20:27 thd kados: so what I imagine is a Zebra database of super meta-records
20:28 a separate DB of MARC 21 bibliographic records
20:29 a separate DB of MARC 21 authority records
20:30 a separate DB of the same again for every other flavour of MARC
20:31 a separate DB of Dublin Core records
20:31 I left out holding records above
20:31 a separate DB of OAI records
20:32 a separate DB of ONIX records
20:32 etc.
20:33 kados thd: there is no need to have them separate
20:33 thd: xslt can do transformations on the fly
20:34 thd kados: well no but I think if you kept your sources separate then you would be better able to identify your source of error
20:35 kados: you would not want to have your super meta-records coming up along with the source records you were trying to add to them or the other way around
20:38 kados: I suppose you could control that with an indexed value in the meta records but certainly you need to keep the different MARC flavour source records in different DBs because you cannot reliably tell them apart
20:39 kados: I think we should create a wiki scratch pad for the super meta-record format and the DB design and invite public comment
20:40 kados: we need a good design quickly because tumer has a single focus and is going to implement something immediately
20:41 kados: after he implements he will not have much desire to change things that he does not know that he needs
20:42 kados: comment?
20:44 kados: can we index different XML paths differently?
20:48 kados: i mean <whatever><bibliographic><syntax name="some_syntax"><100>  differently indexed from <whatever><bibliographic><syntax name="other_syntax"><100> ?
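Nobody answers before the topic moves on, but with the XSLT-driven indexing discussed earlier the answer is plausibly yes: the indexing stylesheet decides which named index each node feeds, so the same tag under two different paths can be indexed differently. A sketch (the z: namespace is the one Zebra's XSLT filters use; the index names are invented, and thd's <100> is recast as datafield[@tag='100'] since XML element names cannot begin with a digit):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:z="http://indexdata.dk/zebra/xslt/1">
      <xsl:template match="/">
        <z:record>
          <z:index name="author-some-syntax:w">
            <xsl:value-of select="//syntax[@name='some_syntax']/datafield[@tag='100']"/>
          </z:index>
          <z:index name="author-other-syntax:w">
            <xsl:value-of select="//syntax[@name='other_syntax']/datafield[@tag='100']"/>
          </z:index>
        </z:record>
      </xsl:template>
    </xsl:stylesheet>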
21:17 ai any idea how to make the ldap authentication on koha?
21:18 please
21:18 russ ai: i can't help you, but have you tried the koha mailing list?
21:19 i have seen a number of posts re ldap over the past couple of weeks
21:21 there is a thread here
21:21 http://lists.katipo.co.nz/pipe[…]/2006/009750.html
21:21 ai thanks
21:22 russ
21:22 have you ever tried that?
21:23 russ nope, like i say, not a tech person
21:23 ai i can see there are 2 authen.pm files
21:23 1 for ldap
21:23 1 for normal
21:23 do we just change the name around?
21:23 oki
21:23 cheers
21:24 russ http://www.koha.org/community/mailing-lists.html
21:41 thd ai: I do not know either because I do not use LDAP but the code for LDAP support has been much improved in the current cvs development
21:43 ai: there is a thread on the koha list or koha-devel list in the last circa 9 months where someone solved problems with the implementation after much frustration with the original non-standard manner in which LDAP was implemented
21:45 ai: the originator of the thread solved the problems and provided new code to fix them
00:05 kados sorry for the bugzilla spam everyone
00:06 I've gone over every single bug
00:06 (not enhancements)
00:06 chris no need to apologise for that
00:06 kados and cleaned everything up
00:06 chris thanks heaps for doing it
00:06 kados I think we've got a manageable set to work with
00:06 48 total remain
00:06 chris cool
00:07 kados that includes all versions < branch 2.2
00:07 chris excellent
00:07 kados 15 blockers
00:08 chris right, we should squish those before 2.2.6 if we can
00:08 kados definitely IMO
00:08 I wrote a mail to paul
00:08 and the list
00:08 requesting this
00:09 chris cool
00:09 kados right ... time for a midnight snack :-)
00:10 mason stay away from the biscuits!
00:10 chris or cookies (for the north american audience)
00:11 kados hehe
00:55 chris, you about?
00:55 mason, you too?
00:56 just for the heck of it, i found the old version of PDF::API
00:56 and took a look at the old barcode system
00:56 it's got some nice features
00:56 http://koha.afognak.org/cgi-bi[…]codes/barcodes.pl
00:57 if you want to play with it
00:57 (seems to even work)
00:58 anyway, thought maybe the printer config and some of the js used on that page might be useful
01:52 paul_away: you up yet?
01:52 in a few minutes I bet :-)
01:53 Burgundavia kados: what are you doing up?
01:53 kados Burgundavia: I might ask the same of you :-)
01:55 Burgundavia: just hacking away as usual
01:59 Burgundavia kados: it is only midnight here, unlike in ohio
02:04 osmoze hello #koha
02:06 kados hi osmoze
02:06 osmoze :)
02:08 kados well ... I'm tired now :-)
02:08 I will be back tomorrow, but I have a meeting in the am, so later in the day ... may miss paul
02:09 paul_away: when you arrive, please weed through bugzilla spam and find my mail about 2.2.6 release
02:34 thd kados: are you still up?
02:39 hdl hi
02:42 Burgundavia thd: 00:08 <kados> I will be back tomorrow
02:43 that was about 30 mins ago
02:44 thd Burgundavia: is tomorrow today or does kados know what day it is?
02:53 Burgundavia thd: that would be today, north american time
02:53 the 3rd
02:54 thd Burgundavia: does kados know that today is today or did he think that 30 minutes ago was yesterday still?
02:55 even if it was not actually yesterday still in his time zone
02:57 Burgundavia thd: by tomorrow I assume he meant in about 9/10 hours from now
03:01 thd Burgundavia: I often pay no attention to timezones or time.
03:01 unless I have to do
03:01 Burgundavia work in open source for long enough and you have to get the concept
03:40 hdl paul: working on reports today.
03:41 kados: see my response to your acquisition bug: does that help?
03:45 paul hdl: thanks for taking care of some of the bugs. I'm working on permissions
03:45 hdl permissions?
03:51 paul #1039
03:51 let's tell each other what we're cleaning up as we each take something on, OK?
04:45 thd paul: are you there?
04:46 paul yep
04:47 thd paul: did you read the logs about meta-records yesterday?
04:47 paul thd: nope
04:48 thd paul: I think Koha can save the world
04:48 paul: http://wiki.koha.org/doku.php?[…]er_meta_record_db
04:48 paul koha can save the world? I thought this had been done 2006 years ago ...
04:49 thd paul: well save it again
04:49 paul lol
04:51 thd paul: I have become too tired to start writing a DTD outline
04:51 paul so, go to bed, it's almost time for frenchies to go for lunch
04:51 thd paul: yes
04:52 paul: look at the logs for discussion between tumer and kados about meta-records while I was disconnected
04:53 yesterday
04:54 paul: it culminated in kados exploding (with joy)
06:44 kados paul, you around?
06:44 I'm trying to get acquisitions working
06:49 paul: still, after closing any basket, when I click on 'receive', and enter in a 'parcel code' i don't get any biblios listed
06:49 paul: you can try it here: http://koha.smfpl.org
06:49 paul: in both default and npl the behavior is the same
06:51 hmmm ...
06:51 now I try searching from the receipt page and I find an item I ordered
06:52 but there is no submit button to save it
06:53 so I click on edit, and now I'm back where I started it seems (even though this basket is closed)
07:30 hdl kados
07:51 paul hello dewey
07:51 dewey : who is kados
07:51 dewey rumour has it kados is becoming a true Perl Monger
07:51 paul dewey who is paul ?
07:51 dewey you are preparing to issue a release while the NPL templates are not working for the record editor now.
07:52 toins hello dewey
07:52 dewey salut, toins
07:52 paul (/me is doing a demo for his nephew)
07:53 hdl dewey : you ugly one !!
07:53 dewey hdl: what?
07:54 toins dewey, tranlate from french hello
07:54 dewey toins: i'm not following you...
07:54 toins dewey, translate from french bonjour
07:54 dewey toins: hello
08:13 kados paul: and hdl: I have updated smfpl with paul's recent commit
08:13 paul kados
08:13 just committed 1st fix to acquisition problem
08:13 the 2nd one arriving in the next minutes
08:14 (i've found where it comes from)
08:14 kados great ... the first one means now I see pending orders!!
08:14 wohoo!
08:15 and I can 'receive' a title, and save it ... wohoo!
08:15 paul ok, 2nd part fixed too (& committed)
08:16 kados what is the second part?
08:16 javascript errors?
08:16 paul I think we will have to get rid with this strange "catview"
08:16 no, when you search for an order in the form
08:16 kados ahh
08:16 paul (it worked when you selected from the pending list, but not when you searched)
08:16 kados ok ...
08:16 paul that's why I missed your problem
08:16 kados smfpl is updated
08:17 paul thx for making a very detailed walkthrough, otherwise I would have had to investigate a lot!
08:17 bug marked "fixed".
08:17 kados thanks
08:17 I'll update npl templates
08:18 paul: so the fix is to just delete the catview?
08:19 paul yep.
08:19 I don't think the catview is useful
08:19 kados ok
08:23 paul: the acqui search for 'from existing record' is a title search, right?
08:23 paul iirc yes
08:23 kados ok ... I will update my wiki
08:24 paul (and not a catalogsearch one. It's just a select from biblio where title like "...)
08:24 kados ahh, ok
08:27 paul: did we agree at dev week to have rel_2_2 use tabs in authorities editor?
08:27 paul: i can't remember
08:27 because it's quite hard to use authority editor
08:28 (even without the bugs I reported, it's hard )
08:30 ok guys, I have to get to a meeting
08:30 I'll be back in several hours
08:30 thanks for bugfixing!
09:15 johnb_away /nick johnb
10:22 paul hello owen
10:22 owen Hi paul
10:23 Everyone's busy busy busy these days!
10:23 paul kados & hdl/me are having a competition: he opens bugs, we close them.
10:23 owen :D
10:23 paul the 1st who resigns has won :-D
10:57 thd paul: I left out the most important part of the purpose section previously at http://wiki.koha.org/doku.php?[…]er_meta_record_db
10:57 kados: see http://wiki.koha.org/doku.php?[…]er_meta_record_db
11:00 owen Heh... best bugzilla cleanup quote so far: "processz3950queue needs luvin"
11:48 thd johnb: are you present?
