All times are shown in UTC.
Time | Nick | Message |
---|---|---|
13:14 | slef | has this meeting slipped by an hour? |
14:03 | kados | slef: still about an hour to go |
14:03 | slef: http://tinyurl.com/c2ter for the time in your area | |
14:20 | T-minus 40 minutes to Searching Group Meeting | |
14:20 | Read up on Zebra: http://indexdata.dk/zebra | |
14:20 | Read up on CQL: http://www.loc.gov/z3950/agency/zing/cql/ | |
14:28 | slef | Thought meeting was 19:00 UTC. Read about CQL. Seems complicated compared to trends, but not looked for competitors (time! :-/ ) |
14:35 | kados | slef: the meeting _was_ at 19:00 ... but paul posted that he couldn't make it so we rescheduled to 20:00 (BTW: what's the diff between UTC and GMT?) |
14:40 | paul | hello kados & slef |
14:43 | kados | hi paul |
14:44 | thd | Why do all Open WorldCat searches have 'england' in the query string? |
14:46 | paul | hello owen. |
14:46 | owen | Hi paul |
14:46 | kados | T-minus 14 minutes till Searching Group Meeting |
14:47 | add your stuff to the agenda: | |
14:47 | http://www.saas.nsw.edu.au/koh[…]ndAndNotes05jun21 | |
14:54 | owen | paul: thanks for the reminder about the template tags and the translator. I know I have a lot of old instances of that in the NPL templates. I'll try to weed them out. |
14:56 | kados | t-minus 4 minutes |
14:56 | chris | morning |
14:56 | paul | hello chris |
14:56 | kados | Try out Zebra: http://liblime.com/zap/advanced.html |
14:56 | hi chris | |
14:58 | paul | (on zap/advanced.html, the paul poulain server is up, but for a reason not clear at the moment, results are not shown. results are in the other zap page - the one that joshua will remind us of now ;-) ) |
14:59 | kados | heh |
14:59 | it's http://liblime.com/zap/try.html | |
14:59 | but paul what's the IP address, port and db name ? | |
14:59 | maybe I've made a mistake | |
15:00 | paul | bureau.paulpoulain.com:2100/Default |
15:00 | and display=usmarc works fine | |
15:00 | even if the biblio is in unimarc in fact. | |
15:02 | kados | OK everyone ... welcome to our first Searching Group meeting |
15:02 | Our agenda is here: | |
15:02 | http://tinyurl.com/bw86r | |
15:02 | (please add to it asap if you've got something to cover that's not listed) | |
15:02 | who's present? | |
15:04 | chris | i am |
15:05 | kados | ok ... so is anyone missing? |
15:05 | paul | francoisl expected to be here, but seems we only have his computer... |
15:05 | kados | paul, chris, slef(at dinner), owen and me |
15:06 | FrancoisL wrote me that he couldn't make it today | |
15:06 | Ok ... well let's get started then | |
15:07 | basically three things to cover: | |
15:07 | Zebra | |
15:07 | OpenSearch | |
15:07 | CQL | |
15:07 | I think Zebra is the biggie | |
15:07 | so let's start with it | |
15:07 | paul | ok for me. |
15:08 | chris | yep |
15:08 | kados | so chris's comments listed on the agenda |
15:08 | are maybe a good place to start | |
15:09 | chris wanna expand on that? | |
15:09 | chris | a couple of points |
15:10 | slef | can it index multiple MARC types in one index? |
15:10 | kados | slef: yep |
15:10 | chris | probably obvious, is that we have 2 audiences for the search, the opac and the librarians .. and that we dont want to sacrifice accuracy for speed |
15:10 | paul | multiple MARC types in 1 index ? |
15:10 | what do you mean ? | |
15:11 | kados | chris: I agree |
15:11 | our priorities should be like googles: | |
15:11 | 1 accuracy | |
15:11 | 2 speed | |
15:12 | chris: are you concerned that Zebra will not be accurate? | |
15:13 | chris | not really, im just conscious that we will have to write a wrapper for it, to check the stuff zebra wont check (item status) and we will need to make sure thats accurate |
15:13 | theres nothing worse than the opac telling you a book is on the shelf when it isnt :) | |
15:13 | slef | paul: index this MARC21 library and that BLMARC and that UNIMARC one and search across all. |
15:13 | paul | slef : ok. |
15:14 | chris | from our initial investigations, zebra looks to be fantastic for indexing and searching bibliographical data |
15:15 | kados | right |
15:15 | chris | i think what we need to do now is maybe build a prototype of how it will work with koha |
15:16 | kados | IMO we need to look at what kinds of data we should expect Zebra to return and what we want from the RDBMS |
15:16 | paul | i have some ideas for this. |
15:16 | chris | yep |
15:16 | paul | i wanted to write a sheet, but could not find time. |
15:16 | do you want me to explain my ideas ? | |
15:16 | kados | yes please do |
15:17 | paul | we have 2 different informations : biblio & item level informations. |
15:17 | so, the question is do we store both in a single MARC record or not ? | |
15:17 | I think we should, at least in zebra. | |
15:17 | so, when zebra find a record, he can return it without more code. | |
15:17 | kados | that would be ideal |
15:18 | paul | but in Koha itself, i think we should still have both informations. |
15:18 | so we could : | |
15:18 | - have a biblio MARC record | |
15:18 | and a item MARC record | |
15:18 | chris | the only problem i can see with that ... is that you would need to be reindexing in zebra constantly |
15:19 | kados | chris: actually, zebra supports updating |
15:19 | chris | as the status of items (on loan etc) will be changing all the time |
15:19 | paul | good point to chris, should do some tests. |
15:19 | kados | yep agreed |
15:19 | so "is zebra updating fast enough not to slow down circ" | |
15:19 | chris | yeah, i like the idea, im just scared it will slow circulation |
15:19 | paul | in my idea we can get rid of marc_*_table and marc_word. |
15:19 | kados | marc_word for sure! ;-) |
15:20 | paul | and store raw iso2709 data in biblio & item tables. |
15:20 | chris | hmmm |
15:20 | that sounds good paul | |
15:20 | paul | biblio.pm being responsible to request zebra-update when one or the other is modified. |
15:20 | chris | but will mean a big rewrite of the C4::Biblio eh? |
15:20 | paul | requires some more coding to change item status in item marc record. |
15:20 | probably not so big. | |
15:20 | Biblio.pm has been made by a good coder ;-) | |
15:21 | chris | heh |
15:21 | kados | welcome |
15:21 | chris | the way i see it, there are 2 ways we can do this |
15:21 | paul | the biggest deal i think is to store item informations. |
15:21 | in UNIMARC, we have the "recommandation 995" that deals with those informations. | |
15:21 | dunno in MARC21 | |
15:22 | chris | pauls idea (which i like) but perhaps has some issues (circulation, tieing koha to zebra) |
15:22 | or implement zebra as a plugin | |
15:22 | kados | how would that work? |
15:23 | chris | have a systempreference, use zebra searching |
15:23 | paul | explain your ideas ? |
15:23 | chris | then have some routines that search for bibliographical data using zebra, and fetch the item data from the issues and items tables |
15:24 | and a cron job that updates the zebra index | |
15:24 | kados | one prob with that that I can see |
15:24 | is what to do with the search api | |
15:24 | because it'll mean two searching methods to maintain | |
15:24 | chris | yep |
15:25 | kados | I'd prefer to simplify things and just use one api (we're short on maintainers) |
15:25 | paul | that was my idea 1st |
15:25 | but i'm not sure it's the best one. | |
15:25 | chris | yep me either |
15:25 | paul | anyway, my idea would not change anything on the Biblio.pm API, so we could have 2 different Koha DBs |
15:26 | one with Koha internal search, one with Zebra | |
15:26 | (internal search being with marc_word as in 2.2) | |
15:26 | chris | right |
15:26 | paul | but really not sure it will be worth the effort. |
15:26 | chris | me either |
15:26 | kados | I'll third that |
15:26 | chris | i dont think we will know until we try |
15:26 | paul | my opinion will definitely be made once we see better how complex it is to configure zebra... |
15:27 | thd | what would be lost from the current search api in substituting zebra? |
15:27 | kados | thd: the idea is that we would actually gain functionality with Zebra |
15:28 | thd | kados: with nothing lost? |
15:28 | kados | thd: right |
15:28 | thd: there's not much to lose ;-) | |
15:28 | thd | kados: he he |
15:28 | chris | ok, so i think a plugin will be out |
15:29 | but the choice will be, zebra for all (need to test .. does this make circ slower) or zebra for biblio, database for item info | |
15:29 | kados | yea .. I'm leaning towards the second one |
15:29 | paul | circ won't be slower chris. |
15:29 | thd | paul: I am too new to know well and too anonymous at the moment. |
15:30 | kados | you mean circ can get slower? |
15:30 | :-) | |
15:30 | chris | paul: if we dont slow circ .. then is there a chance our search will return the wrong results? |
15:30 | ie, if we dont make an issue finished when the index is updated, then the index will be wrong for a period of time | |
15:31 | kados | I'd like to see our first implementation of Zebra 'double-check' the item statuses in Koha |
15:31 | before returning results | |
15:31 | paul | about circ speed : |
15:31 | i think circ will be as fast as it is at present. | |
15:31 | kados | the status check is very quick in SQL (it's a 'factual' dataset) |
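The "double-check" approach kados describes above can be sketched roughly as follows. This is illustrative Python, not Koha code: Zebra answers the bibliographic query, and the item status is re-read from the circulation database before results are shown. All names here (`zebra_hits`, `fetch_live_status`, the field names) are hypothetical.

```python
def fetch_live_status(biblionumber, circ_db):
    """Stand-in for the quick SQL lookup against the items/issues tables."""
    return circ_db.get(biblionumber, "unknown")

def confirm_availability(zebra_hits, circ_db):
    """Overwrite the possibly stale status from the Zebra index with the
    authoritative status from the circulation database."""
    checked = []
    for hit in zebra_hits:
        hit = dict(hit)  # don't mutate the caller's result set
        hit["status"] = fetch_live_status(hit["biblionumber"], circ_db)
        checked.append(hit)
    return checked

# Example: the index still says the book is on the shelf, but it was
# issued two minutes ago, so the live lookup corrects it.
hits = [{"biblionumber": 42, "title": "Cats", "status": "available"}]
circ_db = {42: "on loan"}
print(confirm_availability(hits, circ_db))
```

This is exactly the trade-off discussed below: the OPAC pays one cheap status query per hit, and in exchange never shows a just-issued book as on the shelf.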
15:31 | paul | because zebra index update will be run in crontab, maybe 10mn after the circ |
15:31 | chris | ahh so theres my fears |
15:32 | that i issue a book | |
15:32 | then 2 mins later someone searches for it, on the opac | |
15:32 | it says its on the shelf | |
15:32 | they cant find it | |
15:32 | kados | right ... my fear as well |
15:32 | paul | not necessary. |
15:32 | chris | they ask a librarian, and the librarian says oh we issued it |
15:32 | they get angry and write paul a letter :-) | |
15:32 | kados | heh |
15:33 | paul | we could have 2 behaviours : checking item from Koha opac means checking koha circ DB just after retrieving the record. |
15:33 | kados | yea ... those patron flame letters are coming in by the hundreds ;-) |
15:33 | chris | right |
15:33 | paul | checking item from opensearch or something like that means having an imperfect result maybe |
15:33 | chris | ahh i getcha |
15:33 | yeah that sounds good | |
15:34 | kados | what's the difference? |
15:34 | slef | Are we on 1. Zebra? |
15:34 | kados | (OPAC checks koha tables for searches?) |
15:34 | slef: yep | |
15:34 | chris | yep |
15:34 | paul | when someone checks from Athens university & see "book available", he needs at least 10mn to arrive to Athens PL. |
15:34 | kados | yea ... I like that idea |
15:35 | yep | |
15:35 | :-) | |
15:35 | paul | and in the mean time, the book has been issued ;-) |
15:35 | kados | heh |
15:35 | chris | :) |
15:35 | i think koha + zebra will help with consortia too | |
15:35 | paul | consortia ? |
15:36 | chris | multiple libraries with a unified bibliographical catalog |
15:36 | thd | the book will still show as on the shelf if it is uncharged but in a patron's hands |
15:36 | kados | consortium is a group of libraries collaborating (consortia => library => branch) |
15:36 | thd: good point ... this happens already | |
15:36 | chris | yep |
15:37 | kados | zebra will help with consortia |
15:37 | ok ... so who can make our ideas happen? | |
15:38 | paul | i volunteer to take care of the Biblio.pm package rewriting. |
15:38 | chris | excellent |
15:38 | kados | great! |
15:38 | thanks paul | |
15:38 | paul | but not on a short timescale. |
15:38 | chris | i can help with rejigging the opac |
15:38 | kados | any idea of a timeframe? |
15:38 | I can look at zebra parameters/customization | |
15:39 | (the indexdata folks will be at ALA in a nearby exhibit and I plan to pick their brains when things are slow) | |
15:40 | ok ... so two other points are: | |
15:40 | CQL | |
15:40 | chris | sweet |
15:40 | kados | opensearch |
15:40 | thd | will merely implementing zebra allow searches and links to work properly for synthetic subjects? |
15:40 | paul | about delay : during summer I hope. |
15:40 | kados | paul: great! |
15:40 | paul | but my main problem, for instance, is to understand how to deal with UNIMARC... |
15:41 | hdl should be able to work on this in 2 weeks | |
15:41 | kados | thd: not sure I understand 'synthetic subjects' |
15:41 | paul: ok ... sounds good | |
15:41 | thd | synthetic 'science -- methodology' not represented well in koha |
15:41 | kados | chris: there's a good 'embedding zebra' document that may be a good place to start |
15:42 | chris | cool |
15:42 | kados | thd: I'm not sure what that means |
15:42 | slef | thd: can you give a reference? |
15:43 | thd | kados: marc 650a -- 650x -- 650y -- 650z as a compound subject |
15:44 | kados | thd: ahh ... you're talking about 'see also' feature in koha? |
15:44 | paul | no |
15:44 | thd | kados: yes |
15:45 | kados | thd: that seems to work ok if you've got it setup |
15:45 | paul | he's talking about subjects split across more than 1 subfield. |
15:45 | kados | ahh |
15:45 | paul | for example, in UNIMARC, $x / $y / $z are subdivisions |
15:45 | of a subject. | |
15:45 | for example : | |
15:45 | $a Europe | |
15:45 | $x France | |
15:45 | $x Marseille | |
15:45 | and | |
15:45 | $a USA | |
15:46 | $x Ohio | |
15:46 | $x Nelsonville | |
15:46 | kados | ahh ... I see ... |
15:46 | thd | kados: if science -- methodology koha only sees science in the see also |
15:46 | paul | in Koha 2.2.x, they are poorly managed in normal view. |
15:46 | kados | yes ... Koha used to have a nice subject index when a subject search was returned |
15:46 | I think we lost that in 2.2 | |
15:47 | should be easy to bring back | |
15:47 | paul | not so sure. |
15:47 | thd | kados: will implementing zebra bring it back? |
15:47 | kados | thd: not automatically |
15:47 | paul | the best, I think, would be to be able to say |
15:47 | "subject = 650$a -- 650$x -- 650 $y" | |
15:47 | in marc <=> non marc mapping | |
15:47 | but that's not so easy... | |
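The "650$a -- 650$x -- 650$y" mapping paul proposes can be illustrated with a small sketch: join a heading's subdivisions into one compound string, so that "science -- methodology" links as a whole rather than just "science". This is a hypothetical helper, not Koha's marc <=> non-marc mapping; the subfield codes follow the examples in the discussion.

```python
def compound_subject(subfields, subdivision_codes=("x", "y", "z")):
    """Build a compound subject heading from ordered (code, value)
    pairs taken from a 650 field, keeping $a and the subdivisions."""
    parts = [value for code, value in subfields
             if code == "a" or code in subdivision_codes]
    return " -- ".join(parts)

print(compound_subject([("a", "Europe"), ("x", "France"), ("x", "Marseille")]))
# Europe -- France -- Marseille
```

A "see also" link generated from this compound string would then point at the full subdivision chain instead of only the first subfield.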
15:48 | kados | ok ... so can we move on to CQL? |
15:48 | paul | yep. |
15:48 | thd | kados: yes |
15:48 | kados | any reactions? |
15:49 | slef: I know you had a concern | |
15:49 | the more I read up and learn about CQL the better I like it | |
15:49 | slef | Would we be expecting end-users to construct that syntax? |
15:49 | kados | nope |
15:49 | that's the beauty of CQL | |
15:49 | it only requires the term | |
15:49 | slef | Also, the more I look, the more I suspect, as the LoC CQL site doesn't seem to have useful references |
15:50 | kados | the complex query syntax is there if you need it |
15:50 | slef | So, users would be inputting that for complex queries? |
15:50 | kados | slef: right |
15:50 | thd | CQL is great but I had been concerned about the agenda suggesting it might replace MARC rather than complement it |
15:50 | kados | and Zebra has mappings for CQL to RPN |
15:50 | (RPN being default for Z39.50) | |
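kados's point above can be shown concretely: a bare term is already a valid CQL query, and fielded structure is layered on only when a complex query is needed. The index names `dc.title` and `dc.creator` come from CQL's Dublin Core context set; the helper below is a hypothetical sketch, not a CQL library.

```python
def fielded_cql(**fields):
    """Build a conjunction of fielded CQL clauses from keyword args,
    e.g. fielded_cql(title='cats', creator='bloggs')."""
    clauses = ['dc.%s = "%s"' % (name, value)
               for name, value in sorted(fields.items())]
    return " and ".join(clauses)

print("cats")                     # simplest CQL query: just the term
print(fielded_cql(title="cats"))  # dc.title = "cats"
print(fielded_cql(title="cats", creator="bloggs"))
# dc.creator = "bloggs" and dc.title = "cats"
```

The `dc.` prefix is what the log later calls a "context set": swapping in another prefix (bath, or a Koha-defined one) extends the searchable indexes without changing the query grammar.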
15:51 | slef | The rest of searching seems to have been going towards things like author:bloggs country:uk for field searches, default to and, and simple leading - for nots. |
15:51 | kados | thd: CQL is just a query method ... MARC is a storage method |
15:51 | thd: so one can't replace the other ;-) | |
15:51 | slef: another strength of CQL is namespaces like in XML | |
15:51 | slef | Unfortunately, I don't know what that : style is called formally and searching for query languages brings back lots of XML-related stuff. SPARQL is all well and good, but not suitable for this. |
15:52 | kados | (they call them 'context sets') |
15:52 | so our searching is extensible in other words | |
15:52 | slef | With a while longer, I may be able to express my point better, but that's not really been possible for a few weeks. |
15:52 | kados | paul: have an opinion? |
15:53 | slef: I'm all ears | |
15:53 | paul | no |
15:53 | slef | What namespaces might we want to use? |
15:53 | kados | well we could use the default |
15:54 | and there are a number of open context sets out there as well | |
15:54 | or we could invent our own | |
15:54 | thd | kados: I understood well but what does this quote from the agenda mean " should it replace marc tables in Koha?" mean? |
15:54 | kados | thd: that refers to Zebra I think |
15:54 | slef | kados: can you give me an example? |
15:54 | thd | kados: but zebra would not replace marc either would it? |
15:55 | kados | slef: http://zing.z3950.org/srw/bath/2.0/#2 |
15:55 | paul | thd : zebra will replace marc biblios. |
15:55 | kados | slef: that's the 'bath context set' |
15:55 | paul | internal marc storage of biblio |
15:55 | kados | thd: we will still use MARC ... it's just the way that we store it that's different (in mysql or in textual form) |
15:56 | thd | kados: in my experience textual databases are much easier to manage for many tasks |
15:57 | slef | kados: so users would have to start queries with >bath="http://zing.z3950.org/cql/bath/2.0/" if they wanted to search on the holding institution?! |
15:57 | kados | slef: no ... that's handled server side of course |
15:58 | just like it would be if we were doing an xml namespace | |
15:58 | (which I hope we are with opensearch) | |
15:58 | chris: any words about CQL? | |
15:59 | chris | i dont really have an opinion at this point |
15:59 | i see it as kinda secondary | |
15:59 | slef | anyway, my general feeling is that this is too complicated to expose anywhere outside the backend and even then it looks like it should be kept away from internal interfaces |
16:00 | kados | ok ... well let's put it aside for now |
16:00 | slef | I feel we should be moving towards more search-enginey type freeform query languages if we can. Unfortunately, I can't express that well yet. |
16:00 | kados | slef: I agree completely |
16:00 | slef: which is why I like CQL ;-) | |
16:01 | ok ... so how about opensearch (5 more mins?) | |
16:01 | any opinions? | |
16:01 | slef | kados: CQL looks to me about as far from that as you can get without using XML or a programming language syntax |
16:01 | kados: a9 is amazon? | |
16:01 | kados | slef: yep |
16:02 | slef | so, this is likely to be patent-encumbered? |
16:02 | kados | http://liblime.com/opensearchportal.html |
16:02 | slef: it's an open standard | |
16:02 | slef | (not a worry for me or paul yet, though) |
16:02 | kados | (note that it only works well in mozilla) |
16:02 | the Evergreen folks (particularly Mike Rylander) and I have been mulling over | |
16:02 | the idea of ILL | |
16:02 | slef | kados: that page does nothing here |
16:03 | kados | slef: using mozilla? |
16:03 | slef | using lynx |
16:03 | kados | slef: yep ... need mozilla |
16:03 | thd | kados: what is the problem for other browsers? |
16:03 | paul | kados, could you explain what we could use this for ? |
16:03 | kados | thd: it's just a proof-of-concept |
16:03 | paul | javascript problem it seems. |
16:03 | slef | well, thank you from my poor eyesight :P |
16:03 | kados | paul: sure |
16:04 | so the idea is that we extend the boundaries of opensearch namespage | |
16:04 | namespace even | |
16:04 | rach | they aren't up yet |
16:04 | kados | to allow ranking |
16:04 | of results | |
16:04 | so if you change the "Display style" to "Merged" | |
16:04 | in the above link | |
16:05 | you'll see what I mean | |
16:05 | the patron sees a list of results from many institutions | |
16:05 | all the same 'kinds' of items appear in the same column | |
16:05 | slef | stop taunting me. |
16:06 | kados | we're still working out the details of how exactly to taxonomize the groups |
16:06 | but we've identified at least two kinds of items | |
16:06 | paul | ok, I think I understand. But what will we use this for in Koha ? |
16:06 | kados | physical items you can check out somewhere |
16:06 | paul | multiple catalogue querying ? |
16:06 | kados | and items you can link to electronically |
16:06 | paul | KOha + other catalogues |
16:06 | ? | |
16:07 | kados | paul: exactly ... catalogs AND electronic databases AND journal dbs AND web AND local collections ... list goes on and on |
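The "Merged" display described above can be sketched in a few lines: hits from several OpenSearch sources are interleaved by relevance, then bucketed by item kind (the two kinds identified so far, physical and electronic). Illustrative Python only; the field names and sample data are made up.

```python
def merge_results(*sources):
    """Flatten per-source hit lists into one relevance-ordered list,
    then bucket the hits by item kind for columnar display."""
    merged = sorted((hit for source in sources for hit in source),
                    key=lambda hit: hit["relevance"], reverse=True)
    by_kind = {}
    for hit in merged:  # preserves relevance order within each kind
        by_kind.setdefault(hit["kind"], []).append(hit)
    return merged, by_kind

npl = [{"title": "Cats", "relevance": 0.9, "kind": "physical"}]
cufts = [{"title": "Cat Journal", "relevance": 0.7, "kind": "electronic"}]
merged, by_kind = merge_results(npl, cufts)
print([hit["title"] for hit in merged])  # ['Cats', 'Cat Journal']
```

The hard part, as the discussion notes, is not the merge itself but agreeing on a cross-institution relevance scale and a taxonomy of item kinds.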
16:07 | so with the above link | |
16:07 | you've got three ILS catalogs | |
16:07 | paul | and what protocol does opensearch use to query databases ? |
16:07 | kados | a journal database (CUFTS) |
16:08 | paul: opensearch is http-based GET | |
16:08 | and returns results in RSS format | |
16:08 | slef | kados: how can opensearch be an open standard when it builds on Harvard's Really Simple Syndication (aka RSS 2.0) and its copyright "is the property of A9.com"? |
16:08 | paul | the "queried DB" must support what ? opensearch standard ? |
16:09 | RSS. | |
16:09 | kados | paul: ideally yes |
16:09 | paul: but even if they don't | |
16:09 | paul: we can translate Z39.50 results into opensearch results very easily | |
16:09 | paul: (in fact that's what my portal does for NPL's dataset) | |
16:10 | thd | kados: does a9 have any support for openurl? |
16:10 | paul | is all of this in Perl ? |
16:10 | or just html/javascript client side ? | |
16:11 | kados | paul: the proof-of-concept is a mixture of perl and javascript |
16:11 | paul: (but the page is just html) | |
16:11 | paul | ok, good. |
16:11 | kados | paul: the sites it's querying use perl server-side to generate the XML |
16:12 | slef: we're expanding on the namespace so all the usual rules apply | |
16:12 | thd: a9 doesn't ... but there's no reason that opensearch can't | |
16:12 | paul | so, you plan to use this for OPAC. And in koha-db, we add a table where we store "opensearch servers to query", and, if the user requests, we extend a search to other catalogues. that's it ? |
16:12 | kados | thd: in fact, the CUFTS listing there is an openurl resolver for journals |
16:13 | paul: basically | |
16:13 | paul: when we get NCIP going | |
16:13 | paul: we can go another step | |
16:13 | paul: and let users 'request' items from other libraries too | |
16:14 | paul | that's where ILL arrives. |
16:14 | slef | kados: RSS 2.0 doesn't support XML namespaces, always needing rss in the default namespace. |
16:14 | paul | ok, got it. |
16:14 | kados | paul: exactly |
16:14 | slef: seems to be working ok so far ;-) | |
16:15 | slef: just because we need rss in the namespace doesn't mean we can't expand it | |
16:15 | slef: so if you look at this: http://search.athenscounty.lib[…]opensearch?q=cats | |
16:16 | that's the XML results for a generic search on 'cats' | |
16:16 | using opensearch | |
16:16 | with a new namespace OpenILL | |
16:16 | that Mike Rylander and I have been working on | |
16:16 | slef | kados: "The elements defined in this document are not themselves members of a namespace" (Really Simple Syndication spec) |
16:16 | kados | (right now it just handles the relevance ranking) |
16:17 | slef: the namespace is listed in the link above | |
16:17 | <rss version='2.0' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:openIll="http://open-ils.org/xml/openIll/1.0"><channel> | |
16:18 | so two namespaces ... opensearch and openIll | |
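The two-namespace RSS response quoted above can be reproduced with Python's standard library. `openSearch:totalResults` is a real element from the OpenSearch RSS 1.0 specification; the `openIll:relevance` element name is an assumption based on the remark that the openIll namespace currently "just handles the relevance ranking". The record data is invented.

```python
import xml.etree.ElementTree as ET

# Namespace URIs exactly as quoted in the <rss> element above.
OPENSEARCH = "http://a9.com/-/spec/opensearchrss/1.0/"
OPENILL = "http://open-ils.org/xml/openIll/1.0"
ET.register_namespace("openSearch", OPENSEARCH)
ET.register_namespace("openIll", OPENILL)

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "results for 'cats'"
# OpenSearch extension element on the channel:
ET.SubElement(channel, "{%s}totalResults" % OPENSEARCH).text = "1"
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Cats"
# Hypothetical openIll extension element carrying the relevance score:
ET.SubElement(item, "{%s}relevance" % OPENILL).text = "0.93"

print(ET.tostring(rss, encoding="unicode"))
```

As slef points out, the `rss` element itself sits in no namespace in RSS 2.0; only the extension elements are namespace-qualified.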
16:19 | anyway ... meeting seems to be dying down | |
16:19 | :-) | |
16:19 | rach | but am happy to offer moral support |
16:19 | slef | kados: and what namespace is rss and @version in? |
16:20 | kados | slef: dunno |
16:20 | thd | kados: dying only because I could not get your demo to work in firefox earlier |
16:21 | slef | it's not... it's disembodied junk floating in xml |
16:21 | kados | thd: hmm ... sure you've got javascript enabled? |
16:21 | slef: so that applies to a9.com too then | |
16:21 | thd | kados: yes and it did nothing but impair keyboard shortcuts when I tried |
16:22 | owen | The demo works fine for me in firefox (Win, 1.0.4) |
16:22 | kados | thd: well ... the page requires javascript to work |
16:22 | slef | kados: quite likely. The RSS 2 crowd are better salesmen than the RDF ones. |
16:22 | (RDF, RDF Site Summary/RSS 1, and Semantic Web) | |
16:22 | kados | thd: so if you can enable javascript for a minute you'll see the demo |
16:23 | slef: right | |
16:23 | slef | unfortunately, it's building on shaky foundations and stuff breaks when you stretch it far enough |
16:23 | thd | kados: could it be an OS issue? I am using Debian Sarge presently? |
16:23 | kados | slef: every six months or so I forget how all the rss stuff works |
16:23 | thd: shouldn't be ... I've got a fedora box it's running on fine | |
16:24 | slef | there are two things called RSS, some confusion marketing and an april fool's joke gone wrong |
16:24 | kados | slef: breaks? |
16:24 | slef: right ;-) | |
16:25 | slef | kados: sometimes an XML processor that doesn't know about RSS-2's special requirement will not make it the default namespace and suddenly most RSS-2 tools don't recognise it. |
16:26 | kados | slef: so we'll have to avoid that then ;-) |
16:26 | slef | FWIW, I think the idea of a federated search is a good one. |
16:26 | kados | slef: the power of using XML for returning results is that I can do anything I want with it |
16:26 | with some standardization | |
16:26 | slef | This implementation scares me 3 ways though. Haven't the library and information scientists cooked up one based around RDF and Dublin Core yet? |
16:27 | kados | haven't heard of that |
16:27 | got a link? | |
16:27 | ahh ... you mean OpenILL (the other OpenILL?)? | |
16:27 | :-) | |
16:28 | slef | No, I don't know what's out there. I'd only got as far as researching CQL by today :-( |
16:28 | kados | yea ... they did ... but they haven't released a stitch of code in three years and implemented it in coldfusion anyway |
16:28 | so it's pretty worthless | |
16:28 | thd | slef: agreed, there are problems with poorly defined search queries that may work for one target but not others |
16:28 | kados | but with opensearch we can proxy _any_ z39.50 target |
16:28 | very easily | |
16:29 | slef | why "with opensearch"? Isn't it just "with a defined API"? |
16:29 | kados | slef: have you seen the demo? |
16:30 | slef | which demo? Your javascript one? |
16:30 | kados | yea |
16:30 | thd | kados: as long as the targets are all z39.50 that is good and every server should support z39.50 |
16:30 | slef | I don't have a build of links with javascript support handy. |
16:30 | kados | slef: let's talk about this after you've seen it (so we're on the same page) |
16:31 | does links support XMLHttp? | |
16:31 | slef | dunno |
16:31 | kados | won't work if it doesn't |
16:31 | slef | so, it's going to wait until tomorrow, when my eyes have recovered |
16:31 | kados | I'm all for text-based interfaces ... but you're insane ;-) |
16:32 | slef | no, my eyes are buggy, that's all |
16:32 | kados | :-) |
16:33 | OK ... meeting adjurned | |
16:33 | adjourned even ;-) | |
16:33 | thd | kados: everything should work in lynx, links, elinks if it can without client side javascript. |
16:34 | kados | thd: it's just a proof-of-concept ... |
16:34 | thd | kados: sorry humour :] |
16:34 | kados | :-) |
16:34 | slef | then it's just a beta... then it's just a first production roll-out... |
16:35 | paul | ;-) |
16:36 | almost midnight here. | |
16:36 | slef | why not start right? It's not like javascript is easy to write ;-) |
16:36 | kados | heh |
16:37 | thd | So, what are the difficulties in restoring subject linking where science--methodology links to science--methodology but not science? |
16:38 | slef | The Library of Congress Portals Applications Issues Group http://www.loc.gov/catdir/lcpaig/ |
16:40 | kados | yea |
16:40 | that's just openurl stuff | |
16:40 | not really for ILL I don't think | |
16:41 | like I said, CUFTS is an openurl linker | |
16:41 | and it's included in the portal as one of the result sets | |
16:41 | slef | I don't remember openurl |
16:41 | kados | openurl is a linking method for keeping track of subscriptions to various online stuff |
16:42 | journals, databases, etc. | |
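An OpenURL of the kind kados describes is just citation metadata appended as a query string to an institution's resolver base URL. The key names below (genre, issn, volume, spage) follow the OpenURL 0.1 convention; the resolver host is made up for the example.

```python
from urllib.parse import urlencode

def make_openurl(resolver_base, **citation):
    """Append sorted, URL-encoded citation keys to the resolver base."""
    return resolver_base + "?" + urlencode(sorted(citation.items()))

link = make_openurl(
    "http://resolver.example.edu/openurl",  # hypothetical resolver
    genre="article",
    issn="1234-5678",
    volume="12",
    spage="34",
)
print(link)
# http://resolver.example.edu/openurl?genre=article&issn=1234-5678&spage=34&volume=12
```

This also illustrates the problem thd raises later: the base URL is baked into the link, so an OpenURL found in a public place points at a fixed resolver rather than the reader's own institution's.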
16:42 | slef | there's some stuff there about federated searching |
16:42 | kados | right ... I'll take a look |
16:42 | but looks like just 'vendors' who provide federated searching | |
16:43 | not any standards for how to 'roll your own' | |
16:43 | slef | yep, standards page a bit thin |
16:43 | kados | which is what opensearch/openIll is |
16:44 | paul | ok, going to bed now |
16:44 | slef | opensearch looks proprietary *shrug* |
16:44 | kados | ok ... meeeting over |
16:44 | thd | slef: openurl allows persistent access to the most appropriate copy of a biblio for the institution where the user is affiliated |
16:45 | paul | have a good day. |
16:45 | kados | nite paul |
16:47 | thd | slef: mostly used for accessing journal databases in academic libraries with many different databases rather than mostly consolidated by ebsco or proquest as at many public libraries with less need for openurl at present |
16:50 | I have done some work on a standard means for changing the base url for public and cross-institutional use; otherwise the base only points to a fixed resolver, maybe not the one at your institution, if you have found the openurl in a public place | |
16:53 | kados: are you still here? | |
16:53 | kados | thd: sort of |
16:54 | thd | kados: Why do all Open WorldCat searches have 'england' in the query string? |
16:54 | kados | thd: no idea ... take it up with OCLC ;-) |
16:54 | thd | kados: no this is only in koha |
16:56 | owen | Template bug |
16:57 | kados | thd: so where are you from? |
16:58 | thd | kados: agogme.com |
16:59 | owen: all templates or just npl | |
17:00 | owen | NPL is the only one with the WorldCat link |
17:00 | It's an old bug I forgot to commit the fix for | |
17:02 | thd | owen: do you know anything about the new bug where marc import fails when no isbn is present in the imported record? |
17:02 | owen | Sorry, my thing is templates, mostly. I don't know enough about imports to be able to help |
17:03 | kados | thd: so th is for thomas ... what's d for? |
17:03 | thd | owen: who does? I have had no answer on the devel list and the issue is critical for using koha to copy catalogue. |
17:04 | kados: you have not done a whois on agogme.com yet? :) | |
17:04 | kados | thd: heh |
17:05 | thd | kados: Dukleth |
17:05 | kados | right ... got it now ;-) |
17:06 | so what's your interest in Koha? | |
17:08 | thd | kados: well I am interested in all bibliographic automation systems and koha has added almost enough MARC support for me to use it at least for copy cataloguing. |
17:11 | kados: my interest is really much broader, considering the favourable directions the project is going. If it can query millions of records efficiently then I will consider developing with koha, although I have been using zope for my project's experiments because of some nifty features that are difficult to implement in perl. | |
17:12 | kados | what's the project? |
17:15 | thd | kados: there are two or three public paragraphs on agogme.com. Generally browse oriented information finding, concentrated on bibliographic records with extensions to other information domains. |
17:17 | slef | ILL (Interlibrary Loan) protocol (ISO 10160/1) |
17:19 | thd | kados: koha needs bidirectional mapping for marc so any marc record imported can be modified and exported in marc communications format without data loss from the default framework. This requires a complete one to one mapping to be standard for every field, subfield, and indicator any record might ever have. |
17:22 | kados: the missing information can always be added to the framework by the user but when it is not standard an interested library ought to be very suspicious about koha despite its favourable direction. | |
17:22 | slef | actually, opensearch has prior art in plone, I'm pretty sure |
17:22 | heck, <isindex> is almost prior art ;-) | |
17:23 | thd | slef: prior art does not matter much if litigation expense is your real risk. |
17:24 | slef | who maintains the list of <link rel="XXX" .../> types? |
17:26 | thd | slef: I have researched those countries that may still be free from software idea patents to host a server once all the rich countries fall in the ip wars |
17:27 | slef | thd: hello Angola? |
17:28 | thd | slef: costa rica looks like the best option from the US |
17:30 | slef | Could do cool auto-discovery things with <link rel="index" type="application/rdf+xml" href="/path/to/xmlsearcher" /> telling you to try /path/to/xmlsearcher?querystring |
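slef's auto-discovery idea can be sketched in a few lines: scan a page's `<link>` elements for a machine-readable search index. A minimal sketch using Python's standard-library HTML parser; the page content is made up to match slef's example, and `LinkFinder` is an illustrative helper, not any real Koha code.

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect the attributes of every <link> element on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # handle_starttag is also reached for XHTML-style <link ... />
        # via HTMLParser's default handle_startendtag.
        if tag == "link":
            self.links.append(dict(attrs))

page = """<html><head>
<link rel="index" type="application/rdf+xml" href="/path/to/xmlsearcher" />
</head><body></body></html>"""

finder = LinkFinder()
finder.feed(page)
# Pick out machine-readable indexes the page advertises; a client could
# then try href + "?querystring" as slef suggests.
indexes = [l for l in finder.links
           if l.get("rel") == "index" and l.get("type") == "application/rdf+xml"]
print(indexes[0]["href"])
```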
17:32 | thd | slef: for <link rel="XXX" .../> types do you mean for opensearch only or generally? |
17:33 | slef | generally... found them in www.w3.org/TR/html401 |
17:35 | thd | kados: Why isn't complete marc part of the standard install for koha? |
17:35 | slef | oh my |
17:35 | * OpenSearchDescription - The root node of the OpenSearch | |
17:35 | Description document. | |
17:35 | + Note: the xmlns attribute must equal | |
17:35 | http://a9.com/-/spec/opensearchdescription/1.0/ | |
17:35 | I think that means you can't construct an opensearch which returns opensearches. | |
17:35 | ...which is quite funny to me. ;-) | |
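For context, a minimal sketch of what such a description document looks like and why the fixed xmlns matters to a consumer: a namespace-aware parser addresses elements as {namespace}localname, so the spec pinning the namespace value is what makes lookups reliable. Element names follow the OpenSearch 1.0 spec quoted above; the ShortName and example.org URL are invented for illustration.

```python
import xml.etree.ElementTree as ET

OSD_NS = "http://a9.com/-/spec/opensearchdescription/1.0/"

# Minimal description document (invented content, spec'd element names).
doc = f"""<OpenSearchDescription xmlns="{OSD_NS}">
  <ShortName>Example Catalogue</ShortName>
  <Url>http://example.org/search?q={{searchTerms}}&amp;format=rss</Url>
</OpenSearchDescription>"""

root = ET.fromstring(doc)
# ElementTree exposes namespaced tags in Clark notation: {ns}localname.
short_name = root.find(f"{{{OSD_NS}}}ShortName").text
url_template = root.find(f"{{{OSD_NS}}}Url").text
print(short_name, url_template)
```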
17:36 | chris | which MARC ? |
17:36 | kados | slef: that's just semantics |
17:36 | slef: I don't give a rats ass what the root node says | |
17:37 | slef: what I care about is coming up with a really great federated search | |
17:37 | slef | kados: so how do you have an opensearch which returns a list of opensearches? Define a new namespace iCantBelieveItsNotOpensearch? |
17:37 | kados | slef: yep |
17:37 | thd | slef: all but MARC21 and USMARC are a larger market to start with |
17:37 | slef | What's the hard part of this problem? |
17:38 | kados | slef: there's nothing hard about it |
17:38 | slef: it's quite easy really ... | |
17:38 | slef | I can see why CQL could be useful at this level |
17:38 | kados | slef: yep ... within the query term you use CQL
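Carrying a CQL expression inside an OpenSearch-style query term is mostly a matter of URL encoding. A sketch; the endpoint and parameter names are invented for illustration, and only the encode/decode mechanics are the point.

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Hypothetical OPAC endpoint; the whole CQL expression travels as one
# percent-encoded "query" parameter.
base = "http://opac.example.org/search"
cql = 'title = "open source" and author any "smith jones"'
url = base + "?" + urlencode({"query": cql, "format": "rss"})
print(url)

# A receiving server recovers the expression intact:
recovered = parse_qs(urlparse(url).query)["query"][0]
```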
17:38 | slef | but I don't see what opensearch gets you over HTTP GET, apart from breaking XML. |
17:39 | kados | slef: if a server supports the OpenIll CQL namespace |
17:39 | you can't break xml | |
17:39 | that's the point! | |
17:39 | what good is XML if I can't define how to use it? | |
17:39 | how do standards get written in the first place? | |
17:39 | I'm sick of following the leader and ending up with shitty library interfaces etc. | |
17:40 | slef | rss-2 conflicts with various other XML specs (including OpenSearchDescription, apparently) |
17:40 | kados | so what? |
17:40 | slef | let's use XML that doesn't conflict, like RDF |
17:41 | kados | I don't really see how in practical cases using rss-2 will cause any problem |
17:41 | thd | slef: www.w3.org/TR/html401 is a table of contents page. Which section is relevant? |
17:41 | slef | so why would anyone want to do that? It's not like it's hard to find free software XML parsers that handle namespaces |
17:41 | thd: "Links" sorry | |
17:41 | thd: and then -> rel -> link-types | |
17:42 | kados | slef: let's continue this on-list |
17:42 | slef | kados: this sounds like you not seeing how in practical cases using javascript will cause any problem ;-) |
17:43 | kados: *sigh* will it become terribly polarised? I just don't see what opensearch brings and you don't seem to express it. | |
17:55 | thd | slef: w3.org has the syntax standard. I thought you had found a list of standard implementations for the relation attribute.
17:55 | slef | no, just the standardised contents |
17:58 | thd | slef: the relation attribute supports multiple values, but there has been a problem with some blogging software overwriting the relation attribute with a 'nofollow' value, without preserving the original values, as part of an anti-link-spamming measure.
18:00 | slef: the funny part is that overwriting the relation attribute breaks some blogging-software microformats that use the relation attribute when they appear in comments. | |
18:05 | kados | slef: opensearch does three things: standardizes ILL with the OpenIll namespace; opens up Koha catalogs to all opensearch portals; brings live search results RSS feeds to Koha |
18:12 | slef | kados: the OpenIll namespace is (should be?) separate; are there many opensearch portals?; connecting to searches should be done anyway, through html link or RSS-1 textinput.
18:14 | http://purl.org/rss/1.0/modules/search/ | |
18:14 | thd | slef: what is the practical implication of the opensearch conflict with xml? |
18:15 | slef | thd: it conflicts with some possible searches. |
18:16 | as in, some types of searches need ugly workarounds... it's totally unnecessary to do that in xml. xml is meant to be extensible. | |
18:16 | thd | slef: with the results returned or the query? |
18:17 | slef | um, the results can't be expressed, basically |
18:18 | say I have a search engine search engine, which searches for a matching OpenSearchDescription | |
18:19 | actually | |
18:23 | that case is actually solvable, but not obvious | |
18:24 | so, say I have a search engine of RSS-2 feeds... I can't return any channel details in the results because they're not in a namespace | |
18:28 | thd | slef: I am somewhat confused about the use of namespace in the discussion |
18:30 | slef | namespaces canonicalise tags, similar to module hierarchies in perl - "is this $Version $::Version or $DBI::Version?" |
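slef's Perl analogy maps directly onto XML: two vocabularies can both define a `title` element, and the namespace is what keeps them distinct. A sketch; the Dublin Core namespace URI is the real one, while the `media` namespace and the document content are invented for the example.

```python
import xml.etree.ElementTree as ET

# Two vocabularies both define <title>; namespaces keep them distinct,
# much like $Foo::Version vs $Bar::Version in Perl.
doc = """<item xmlns:dc="http://purl.org/dc/elements/1.1/"
              xmlns:media="http://example.org/media-ns">
  <dc:title>Vocal music</dc:title>
  <media:title>cover-scan.png</media:title>
</item>"""

root = ET.fromstring(doc)
dc_title = root.find("{http://purl.org/dc/elements/1.1/}title").text
media_title = root.find("{http://example.org/media-ns}title").text
print(dc_title, "|", media_title)
```

A bare, namespace-less `<title>` (as in RSS 2.0) has no such qualifier, which is exactly the embedding problem slef describes above.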
18:32 | thd | slef: So the channel data is undefined in your example? |
18:32 | slef | there's no way of saying where the tags to describe it come from, unless we define some as a workaround |
18:35 | thd | slef: I do not have enough rss background to appreciate the problem fully. I know I am not really with the 21st century unless I know rss ;) |
18:38 | slef | there are probably other cases where this breaks, but I'm not 100% sure... encryption seems a likely one |
18:39 | this is xml and including objects in each other | |
18:41 | the rss problem is mainly that there are two: RDF Site Summary (RDF is nice and librarians seem to like it, which I think is promising) and Really Simple Syndication (a mix of ideas from Channel Description Format and RDF Site Summary with marketing chutzpah mixed in) | |
18:42 | RDF Site Summary was RSS 0.9 and then the first Really Simple Syndication was released as RSS 0.92 | |
18:42 | RDF Site Summary was updated with new modules (Dublin Core!) to become RSS 1.0 | |
18:43 | and then the next Really Simple Syndication was released as RSS 2.0... | |
18:43 | ...so developers looking to quickly add RSS support add the non-XML/RDF one :-/ | |
18:43 | thd | slef: So how did rss 0.92 and later versions come to be developed in a non-standards-compliant manner?
18:46 | slef | They were produced essentially by Dave Winer, the djb of the Semantic Web. 0.92 backed by Userland Software - can't remember whether 2.0 was released before Dave Winer moved it to Harvard or not. |
18:46 | uh, do you know about djb? Basically, ignore the parts of standards that you don't like ;-) | |
18:48 | thd | slef: excuse my ignorance. What does djb stand for?
18:49 | slef | Daniel Bernstein, developer of the (not free software) qmail and djbdns |
18:49 | http://cr.yp.to/ IIRC | |
18:51 | thd | slef: So rss 2.0 cannot carry some types of xml files? |
18:54 | slef | It can't carry some (including itself) but also it cannot be combined at all with ones doing the same bad practice as itself and cannot be processed with some standards-compliant XML tools (but mostly they are adding workarounds for this sort of stunt). |
18:55 | basically, imagine writing a large perl system all in the global namespace | |
18:55 | yes, it used to be done and can still be done, but most people don't do it any more | |
18:56 | that is, a large perl system without using modules at all | |
18:56 | thd | slef: your perl analogy is clear :) |
18:57 | slef | mmm, maybe I should write that up |
18:57 | not thought of that one before | |
18:57 | by the way, http://www.thewalks.co.uk/makerss.rc if you like fun shell script | |
18:59 | thd | slef: yes, write it down before writing it up. Most people's best thoughts are forgotten. At least there is a log here :)
19:02 | slef: no dns for http://www.thewalks.co.uk/ | |
19:03 | slef | hrm? |
19:05 | worksforme and registration looks ok | |
19:09 | thd | slef: maybe my isp does not want me to see this. |
19:10 | slef | they're probably in cahoots with logging companies, so block pro-tree sites ;-) |
19:11 | rach | hmm - I just visited a site who've got 2.2.2 on windows xp I think, and I believe it's not saving the item data |
19:11 | ahhh I have just read chris's e-mail | |
19:12 | chris | probably the no stop words |
19:13 | rach | ah yes |
19:13 | chris | if its internal server erroring anyway |
19:13 | rach | no it's not doing that |
19:13 | it doesn't come up with an error at all - it's a bit odd actually | |
19:13 | chris | no idea then, nothing in the error logs? |
19:13 | rach | so we go and add a biblio, and it's up to number 24, and it adds a group, but not the item |
19:14 | I'll pop back when I'm out this afternoon and see if the stop words thing is it, and if not, I'll go down the error logs route | |
19:14 | chris | k |
19:19 | thd | slef: So the difficulty is that rdf 0.9 / 1.0 lost the marketing battle to rss 0.92 / 2.0? |
19:20 | slef: What support is there for ILL over rdf 0.9 / 1.0? | |
19:21 | slef | well, it's still going on, but it seems bleak... I would hope that librarians of all people would appreciate the benefits of namespaces and rdf |
19:21 | and it's rss 0.9 / 1.x | |
19:21 | RDF is a more general bunch of tags | |
19:24 | Not sure about ILL support. It might need developing. There has been taxonomy and search support for years now. | |
19:26 | It sounds like someone's working on ILL support in XML anyway... ;-) | |
19:26 | thd | slef: Which someone? |
19:28 | slef | kados? |
19:31 | thd | slef: I assumed he was working with an already existing standard. I guess I am forgetting something.
19:35 | slef: I have a significant background in the book trade. X12 format XML is used for a book trade ordering standard in the US. Perhaps that could be adapted or extended for ILL. It would be nice for one format to be used for both orders and loans. | |
19:36 | slef: Then the US would just need to persuade the rest of the world to adopt X12 extended :) | |
20:33 | kados | slef: I see your point about RSS 2.0 vs. RDF |
20:33 | slef: I'll do a bit more research about the issue when I get back from ALA | |
20:34 | slef: (right now that's pretty much taking up all my personal time) | |
00:31 | rach | you have personal time kados? |
00:31 | kados | heh |
00:31 | rach | well of course not right now :-0 |
00:31 | kados | :-) |
00:31 | rach | so are you excited about going off to ala? |
00:31 | kados | pretty excited |
00:32 | also a bit nervous | |
00:32 | rach | was the box any use to you? |
00:32 | kados | on the open-source front it'll be us and indexdata |
00:32 | rach: absolutely | |
00:32 | rach | cool :-) |
00:32 | kados | thank you very much |
00:32 | did chris show you our brochures? | |
00:32 | rach | nope |
00:32 | kados | brochure |
00:33 | well ... since you've got bandwidth issues better ask him for it -- it's quite large | |
00:33 | http://liblime.com/liblimebifold.pdf | |
00:33 | in case you don't care ;-) | |
00:33 | rach | :-) |
00:33 | it's here | |
00:33 | kados | actually, that's not the final revision ... |
00:33 | rach | we don't have bandwidth issues all the time |
00:34 | chris | rach is ok, they are on a flat rate plan :) |
00:34 | kados | right ... well I meant having to pay and all that |
00:34 | ahh ;-) | |
00:34 | rach | it's just a bit erratic |
00:34 | oh yeah, no money issues, that's why it can be erratic :-) | |
00:34 | chris | its just us poor saps in the burbs that have to pay :) |
00:34 | kados | hehe |
00:34 | I'd love to get your reaction to the brochure | |
00:35 | (two problems on it that we fixed in the final proof: 1) a layer problem with one of the blurry opensearch proxy images, and 2) on the outside back cover there's a square around the Koha logo) | |
00:36 | other than that it's pretty much the same | |
00:39 | rach | looks good |
00:39 | looks to be fully buzzword compliant :-) | |
00:39 | chris | :) |
00:39 | kados | hehe |
00:39 | rach | although I don't see XML in there |
00:39 | kados | it's under RSS |
00:39 | :-) | |
00:40 | rach | a slightly odd hyphenation - in-teroperability |
00:40 | kados | yea ... too late to fix that now ;-) |
00:40 | I noticed it on the proof | |
00:41 | rach | ah well next time :-) |
00:41 | kados | yep |
00:41 | rach | and a turn of phrase that is a bit odd to my "ear" but may be how you'd express it |
00:41 | kados | what's that? |
00:42 | rach | "we founded liblime to meet your vendor needs on open source"
00:42 | "we founded lib lime to meet your needs for an open source vendor" | |
00:42 | kados | yea ... that would be better |
00:43 | rach | I think it's the "on" |
00:43 | kados | it's kinda ambiguous too |
00:43 | do vendors have needs? | |
00:43 | or do the librarians have need of vendors ;-) | |
00:43 | rach | ? |
00:43 | well you're saying they do :-) | |
00:43 | kados | right |
00:43 | or I'm saying that they're vendors ;-) | |
00:44 | indradg | kados, nice job.... how big does this thing print in hardcopy?
00:44 | rach | yeah which is wrong, they aren't the vendors |
00:44 | kados | indradg: glad you're around ;-) |
00:44 | indradg: thanks ... it prints at 8.5/11 in | |
00:44 | indradg: how's the livecd coming? | |
00:44 | rach | unless you're actually supporting other vendors - rather than the libraries?
00:44 | kados | hehe |
00:44 | indradg | kados, i need to check out on that... was away from city for the last 36 hrs... just got back |
00:45 | kados | gotcha |
00:45 | rach | ah and so - the next sentence is a little negative
00:46 | we make it possible for libraries like yours to use OS software like koha, by providing outstanding support and training for your existing staff | |
00:46 | ie - you don't need to hire new people | |
00:46 | and you don't need to feel like a loser cause you can't do it yourself :-0 | |
00:46 | kados | heh
00:47 | rach | this stuff is hard tho |
00:47 | kados | the final proof has: |
00:47 | We make it possible for vendor-reliant libraries to use open-source software--like Koha--by providing them with outstanding support and training options. | |
00:47 | rach | yep |
00:47 | indradg | that sounds mucho better! |
00:47 | rach | it's the vendor reliant that I thought might get a few backs up |
00:48 | kados | huh |
00:48 | indradg | rach has a point |
00:48 | kados | I don't quite see that tone |
00:48 | rach | maybe it's cultural :-)
00:48 | kados | maybe it's an american thing |
00:48 | or maybe I've just been looking at it too long ;-) | |
00:49 | indradg | rach, I agree... over here that line would say to some people "we think we understand your job better than you do"
00:49 | kados | so how would you put it rach? |
00:49 | rach | yep - vendor reliant I think would have a negative connotation here - umm, reliant meaning sort of tied to |
00:49 | kados | hmmm ... i'll have to ask my librarian friends ;-) |
00:50 | rach | yeah |
00:50 | it's like "used car salesman reliant" | |
00:50 | kados | hehe |
00:50 | rach | as everyone hates their vendors as well :-) |
00:50 | kados | yep |
00:51 | rach | well the first line is "personal" says your |
00:51 | but the next line changes focus and is back out to "other libraries" | |
00:51 | so you could just keep it personal - so first line, we've established they need an OS vendor | |
00:52 | (and if they don't they will stop reading :-) | |
00:52 | kados | :-) |
00:52 | so instead of "But lack of vendor support has made it impossible for many libraries to benefit" | |
00:52 | rach | ah no that's fine |
00:52 | so you start out general, in first para | |
00:53 | setting out the problem | |
00:53 | then make it personal - you have had this problem | |
00:53 | then the next sentence needs to still be personal - now you don't have to have this problem | |
00:53 | kados | i don't get it |
00:53 | :-) | |
00:54 | in my mind it reads: | |
00:54 | you've got this problem | |
00:54 | we can help | |
00:54 | we're different because | |
00:54 | we use open source | |
00:54 | you 've heard about open source | |
00:54 | but probably aren't using it | |
00:54 | we can help you use it | |
00:54 | here's how we help | |
00:55 | here's why open source rocks | |
00:55 | rach | ah I read - (starting at open source is the difference) |
00:56 | kados | I think I see what you mean now |
00:56 | rach | open source is cool, but has been hard to get into. We offer services to *you*. We offer services to other libraries who are vendor dependent |
00:57 | I want to keep going with "you" | |
00:57 | kados | right |
00:57 | yep ... that would be better | |
00:57 | damn ... should have had you look at this last week ;-) | |
00:57 | next time ;-) | |
00:57 | rach | :-) |
00:57 | kados | any comments on layout/graphics? |
00:58 | rach | nice use of people |
00:58 | it's quite busy, but that's pretty normal I think | |
00:58 | (and quite american :-) | |
00:58 | kados | heh |
00:58 | rach | so prolly good for your audience |
00:59 | kados | yea ... the NZ stuff doesn't fly here as well ... folks are used to pushiness ;-)
00:59 | rach | yep |
01:00 | I'm less a fan of the egg with the green middle as a logo, I like the newer one but it works with the girl with it in her hand | |
01:01 | kados | hmmm ... i actually like the older one better ;-) |
01:01 | rach | :-) |
01:01 | maybe I've seen it too often :-) | |
01:01 | kados | if we're not careful version 3 might just be a star trek communicator ;-) |
01:02 | rach | :-) |
01:02 | did I see someone offering to do a klingon translation? | |
01:02 | kados | heh |
01:29 | thd | kados: how does your marketing distinguish you from other companies that wear the open source banner in a small way while their core product is closed source?
01:34 | kados | thd: I don't really understand the question |
01:35 | rach | don't stay up too late :-)
01:37 | chris | i think he means, there are a bunch of companies who claim to use opensource, but only do in a very small way |
01:38 | thd | kados: I do not have a specific reference but I have increasingly seen companies such as ILS companies announcing some small open source component but you have to license their proprietary system for it to do any good. |
01:39 | kados | hmmm ... well LibLime doesn't have any proprietary systems |
01:39 | chris | yep, i think that he was saying you should make that point |
01:39 | thd | kados: exactly |
01:39 | kados | ahh |
01:43 | thd | kados: Other examples are OCLC open-sourcing some outdated software while the current version is closed source, and then prohibiting public use of detailed DDC hierarchies out of their expressed fear of other libraries taking the DDC without paying a license fee.
01:44 | kados | yep |
01:45 | OCLC is good at that ;-) | |
01:50 | thd | At least many companies, even OCLC, are a little friendlier to open source, and Index Data's licensing terms are now friendly where formerly they required a fee-based commercial license for commercial use.
01:52 | indradg | kados, we are having some problem with the mysql server permissions on the liveCD.... we are trying to figure it out.. hopefully it will be ready before you leave for ALA
01:52 | kados | indradg: what kind of problems? |
01:52 | indradg | /var/lib/mysql getting owned by root |
01:53 | instead of mysql user | |
01:53 | kados | indradg: ahh ... |
01:53 | indradg | apache is running fine though... so hopefully we'll have it worked out soon |
02:02 | kados | chris I've been thinking about grepping lexile scores from http://www.lexile.com and displaying them in the opac |
02:03 | thd | kados: Were you planning to work with OpenILL for your ILL idea? They announced moving to PHP, followed by a code release in January.
02:04 | kados | thd: right ... well i'll believe it when I see it |
02:04 | they've been in production for over two years | |
02:04 | and no releases yet | |
02:04 | plus they deployed on cold fusion | |
02:04 | which doesn't bode well for porting to php | |
02:06 | thd | kados: they offer services based on their cold fusion implementation but no code. |
02:06 | kados | yep |
02:08 | thd | kados: then you have independent intentions as there is no FOSS ILL system yet? |
02:08 | kados | thd: yep |
02:09 | not that independent though | |
02:09 | thd | kados: meaning? |
02:09 | kados | the other major open-source ILS, Evergreen will also support the new Openill |
02:09 | I'm working with Mike Rylander | |
02:09 | to develop the new namespace for openill | |
02:09 | (we may rename it) | |
02:10 | opaul | koha is a 24/7 project. |
02:10 | when paul awakes, joshua is almost going to bed. | |
02:11 | thd | kados: so the references I saw to mike and ill are not related to the existing Open ILL system? |
02:11 | kados | nope |
02:11 | not related at all | |
02:12 | thd | kados: I imagine you will need a somewhat different name to avoid a trademark conflict. |
02:13 | kados | yep |
02:13 | maybe 'freeill' or something | |
02:16 | thd | kados: I did get your javascript demo working and it looks nice. I may have not noticed the bottom of the screen change at first. I was afraid to repeat my attempt the first time to avoid some problem that might crash my x-windows session. |
02:17 | kados | glad you like it |
02:22 | chris | hmm lexile would be kinda cool for school libraries |
02:22 | there are a few in wellington using koha now (high schools) | |
02:28 | kados | yea ... it's just a matter of writing a little script to query the isbn search via POST and scrape the score |
02:29 | something I won't be having time to do before ALA ;-) | |
02:29 | (haven't done POST before ... GET would be pretty easy though) | |
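The POST mechanics kados mentions are not much harder than GET. A sketch with Python's urllib; the endpoint URL and the form field name are guesses — the real lexile.com form would have to be inspected (and its terms of use checked) before scraping anything.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical form field name and endpoint; only the POST mechanics
# are the point. The ISBN is just an example value.
params = urlencode({"isbn": "0439064864"}).encode("ascii")
req = Request("http://www.lexile.com/search",  # assumed endpoint
              data=params,
              headers={"Content-Type": "application/x-www-form-urlencoded"})
# Supplying a data= body is what turns the request into a POST.
print(req.get_method(), req.data)
# Sending it would be urllib.request.urlopen(req).read(), after which
# the score could be scraped out of the returned HTML.
```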
02:30 | something like 600,000 images ;-) | |
02:30 | thd | reading level should be encoded in marc records already |
02:30 | kados | thd: where? |
02:30 | thd: in lexile score form? | |
02:31 | thd: do you know the tag/subfield it would be in? | |
02:31 | osmoze | hello |
02:31 | kados | I can check my data pretty quickly if you do |
02:31 | howdy osmoze | |
02:31 | thd | kados: in the form specified for marc. I will search marc bibliographic. |
02:32 | hdl | hi |
02:33 | kados | nothing in 526b or 521a |
02:33 | morning hdl | |
02:34 | paul | 'morning hdl. |
02:34 | did you recieve a gift this morning ? | |
02:34 | kados | morning paul ;-) |
02:34 | paul | (hdl waiting impatiently for a new computer...) |
02:34 | kados | ahh ... nice |
02:35 | paul | (you missed my 9:10 sentence it seems :-D ) |
02:35 | thd | kados: 521 - TARGET AUDIENCE NOTE |
02:36 | kados | 1 | Young Adult. | NULL || 4330 | 147 | 521 | 20 | 0 | a | 1 | 3.7 | NULL || 4331 | 147 | 521 | 20 | 0 | b | 2 | Follett Library Book Co. | NULL || 4332 | 147 | 521 | 21 | 2 | a | |
02:36 | there's some stuff in there | |
02:37 | thd | kados: http://www.loc.gov/marc/biblio[…]not1.html#mrcb521 |
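A sketch of how a 521 field like the one in kados's database dump might be modelled and displayed in an OPAC. The first-indicator meanings come from the MARC 21 bibliographic format for field 521; the data structure and the rendering itself are invented for illustration.

```python
# A 521 (Target Audience Note) field as (tag, indicators, subfields);
# first indicator "2" means "Interest grade level" per MARC 21.
field_521 = {
    "tag": "521",
    "indicators": ("2", " "),
    "subfields": [("a", "3.7"), ("b", "Follett Library Book Co.")],
}

def display_note(field):
    """Render the note roughly the way an OPAC might (invented layout)."""
    labels = {
        " ": "Audience",
        "0": "Reading grade level",
        "1": "Interest age level",
        "2": "Interest grade level",
    }
    label = labels.get(field["indicators"][0], "Audience")
    parts = []
    for code, value in field["subfields"]:
        # $b is the source of the note in 521, so set it off visually.
        parts.append(f"(source: {value})" if code == "b" else value)
    return f"{label}: {' '.join(parts)}"

print(display_note(field_521))
```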
02:39 | kados | bbiab |
02:39 | thd | kados: Of course you need a subscription to the looseleaf service or an online subscription for full docs.
02:47 | jean | hi |
02:48 | paul | and here comes our good jean, as on every self-respecting Wednesday ;-)
02:48 | thd | paul: What is the difficulty about reimplementing a search for subject subdivisions such as 650#0$aVocal music$zFrance$y18th century ? |
02:48 | paul | thd : not difficult to search everywhere (with see also parameter) |
02:48 | but the look is really poor. | |
02:49 | thd | paul: what about the links in the interface? |
02:49 | paul | where opac-detail.pl or opac-MARCdetail.pl ? |
02:50 | thd | paul: opac-detail |
02:50 | jean | :) |
02:52 | paul | you should open a bug on bugs.koha.org |
02:53 | thd | paul: a bug that will 'never' be squashed? |
02:53 | paul | no. |
02:53 | as I have a customer with "builded" subjects, I have to find a solution to this problem | |
02:53 | ;-) | |
02:54 | thd | :-] |
02:57 | paul: and what about the missing marc fields in the standard framework distribution, especially the fixed fields? I cannot understand why the fixed fields would have been excluded except that they work differently from the others. | |
03:40 | paul: I just realised that Koha seems to have no means to preserve the order of subfields. 650#0$aVocal music$zFrance$y18th century would seem to become 650#0$aVocal music$y18th century$zFrance in Koha. In very many common and simple cases this problem would never be seen but it could occur in many fields. Am I missing something about how Koha stores data? | |
03:41 | paul | no |
03:41 | (you miss nothing) | |
03:41 | it's a limit of Koha 2.2 | |
03:42 | thd | paul: Was the original subfield order supported prior to 2.2?
03:42 | paul | no |
03:43 | thd | paul: Would using zebra correct for this upon importing a pre-existing set of marc records?
03:44 | paul | probably. |
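thd's 650 example makes the order loss concrete. A sketch of the difference between order-preserving storage (a list of (code, value) pairs) and code-keyed storage that re-emits subfields in code order, which is roughly what the 2.2 limitation amounts to per the discussion above.

```python
# 650 #0 $aVocal music$zFrance$y18th century, as an ordered list of
# (subfield code, value) pairs -- order carries cataloguing meaning.
original = [("a", "Vocal music"), ("z", "France"), ("y", "18th century")]

# Lossy storage: group values under their code, then re-emit sorted by
# code, discarding the cataloguer's original sequence.
by_code = {}
for code, value in original:
    by_code.setdefault(code, []).append(value)
lossy = [(code, v) for code in sorted(by_code) for v in by_code[code]]

ordered_heading = "".join(f"${c}{v}" for c, v in original)
lossy_heading = "".join(f"${c}{v}" for c, v in lossy)
print(ordered_heading)
print(lossy_heading)  # $y now precedes $z: the subdivision order flipped
```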
03:47 | thd | thd: well I must sleep ++++ |
03:47 | paul | good night |
03:51 | 'morning francoisl | |
04:25 | slef | there's something about getting a reply in german that makes me laugh |
04:26 | I guess it's pretty rare. 9 times out of 10, I write in German and the reply comes back in English, which is fine, but seems backwards to me with both of us using second languages. | |
04:36 | paul | jean, did you change anything in the "optimisation..." document? because the one flc just sent me doesn't contain any more lines about mod_perl
04:36 | or else he sent me the wrong version of the document. | |
04:37 | jean/francoisl: about the PAQ and Perl practices, there has recently been a document on www.kohadocs.org (right at the end) | |
04:37 | on this point. It sets out the usual rules and practices in Koha | |
04:38 | mmm... sorry, it's not on kohadocs (yet). | |
04:39 | you have to look in the koha-devel archives, in stephen hedges's mail of 16 June. | |
04:39 | titled 'draft (again) of coding guidelines' | |
04:48 | jean | Er, I haven't changed anything in the document, and we haven't yet discussed internally what modifications to make to it
04:48 | paul | ah, ok. |
04:48 | since he was sending it back to me, I thought there was something new! | |
04:48 | jean | :) |
06:56 | slef | "A rat is being partly blamed for a major communications crash which has caused chaos in New Zealand." !?!? |
06:56 | I tell you, if we built houses as well as the internet, the first woodpecker would wipe out civilisation. | |
07:00 | "The Los Angeles Times, has temporarily ended its short-lived trial which gave readers the chance to edit its editorials on its website [...] they decided to end the trial early on Sunday after explicit photos were posted" | |
07:01 | Today's award for discovering the blindingly obvious goes to the LA Times. | |
07:16 | chris | heh |
07:17 | 2 breaks in telecoms 2 trunk fibres .. 300 kilometres apart within 3 hours of each other .. that's one fast rat | |
07:18 | paul | good sleep |
08:31 | Surprise for everybody : | |
08:31 | https://sourceforge.net/projec[…]release_id=336931 | |
08:31 | http://sourceforge.net/project[…]release_id=336931 | |
08:31 | kados | excellent! |
08:32 | hdl | Good! |
08:44 | slef | paul: did you merge bug 984's patch? |
08:44 | paul | checking... |
08:46 | slef | ok, wasn't in release notes, that's all |
08:46 | paul | I didn't announce some bugfixes that are impossible for librarians to understand.
08:47 | (& that are minor from their point of view) | |
08:51 | slef | good sysadmins read release notes too |
09:34 | paul | hi owen. |
09:34 | owen | Hi paul |
09:35 | paul | (joshua was still here 1 hour ago, so don't expect him to be a really good programmer today. Don't ask him anything important, if you want my opinion ;-))
09:35 | owen | :D |
09:38 | I think he's about to leave for the American Library Association meeting in Chicago anyway | |
09:38 | We're going to have to get along without him for a while :( | |
09:38 | paul | right. |
09:39 | (in french we say : when the cat is out, mouses dances) | |
09:39 | owen | In English: When the cat's away, the mice will play |
09:41 | slef | mmm, "cat is out" can mean "cat is hunting" |
09:41 | paul | so, it's cat is away ;-) |
11:02 | tim | I was just trying to upgrade to 2.2.3 and the backup summary says it backed up 0 biblio entries, 0 biblioitems entries, 0 items entries and 0 borrowers.
11:02 | but when I look at the backup file it seems to have everything. | |
11:04 | kados | owen: how's your network connection these days? |
11:05 | owen | It's been pretty good. |
11:05 | kados | cool |
11:05 | owen | No real crawling slow times, even with heavy use in the past week. |
11:05 | kados | I just realized that I never did hear back from intelliwave |
11:05 | great | |
11:05 | they must have found the problem | |
11:05 | owen | About the outage the other day? Or about the speed in general? |
11:05 | kados | and were too ashamed to admit what it was ;-) |
11:06 | speed in general |