All times are shown in UTC.
Time | Nick | Message |
---|---|---|
12:03 | kados | shaun: so you installed a fresh version of Koha? |
12:03 | | redid the DB and everything? |
12:03 | | I don't have time to troubleshoot that specific problem |
12:04 | | but I can help you uninstall your current Koha and start over |
12:04 | | you can probably do that in under 10 mins |
12:04 | | drop the koha database |
12:04 | | delete /etc/koha.conf |
12:04 | | and /etc/koha-httpd.conf |
12:04 | | delete from db where user like 'kohaadmin' |
12:04 | | delete from mysql where user like 'kohaadmin' |
12:04 | | run installer.pl |
12:05 | | (actually, grab 2.2.2b while you're at it) |
12:06 | | sorry ... that second query is wrong ... should be |
12:06 | | delete from user where user like 'kohaadmin' |
12:06 | | both of those are after doing 'use mysql' in your mysql client |
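A minimal sketch of those cleanup steps as a single Perl/DBI script. Two assumptions: the Koha database is literally named `koha`, and you have MySQL root credentials; adjust both to your install.

```perl
#!/usr/bin/perl
# Sketch of the uninstall steps above; the 'koha' database name and the
# root credentials are assumptions, not values every install will use.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(
    'DBI:mysql:database=mysql;host=localhost',
    'root', 'ROOT_PASSWORD',                     # placeholder credentials
    { RaiseError => 1 },
);

# 1. drop the koha database
$dbh->do('DROP DATABASE IF EXISTS koha');

# 2. remove the kohaadmin grants (the corrected queries from above,
#    run against the mysql grant tables)
$dbh->do("DELETE FROM db   WHERE user LIKE 'kohaadmin'");
$dbh->do("DELETE FROM user WHERE user LIKE 'kohaadmin'");
$dbh->do('FLUSH PRIVILEGES');

# 3. delete the old config files so installer.pl starts from scratch
unlink '/etc/koha.conf', '/etc/koha-httpd.conf';
```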
12:28 | shaun | sorry, i was away, back now |
12:28 | | has 2.2.2b been rolled yet? |
12:30 | | it's all new - the database is a new one which I created a couple of days ago, each on a completely fresh install of mysql 4.1.11 on a completely fresh install of fedora core 3 |
12:38 | owen | 2.2.2b: http://sourceforge.net/project[…]release_id=318874 |
12:40 | shaun | ah, great - what changes are there, without me downloading it? |
12:49 | | does it include my fix to the returns page? :p |
12:51 | owen | I wonder if paul created the 2.2.2b package without updating CVS? Is that possible? |
12:54 | shaun | i have to ask about the koha release cycle: when preparing for major and minor releases, do we get the cvs head, roll it, and then fix all of the known bugs on that so that release candidates are issued, or is a separate branch maintained for each release, with the new features being developed in head, and the fixes being ported to the release branch, or what? |
13:00 | owen | new features are developed in head, and bugfixes are in the rel_ branch. So current bugfixes go into rel_2_2. |
13:01 | shaun | so, paul rolled this release from the rel_2_2 branch, then? is this the same with new major releases (e.g. for 2.4, will there be a rel_2_4 branch? when will it split?) |
13:03 | owen | Yes, this release should have been from the rel_2_2 branch. When 2.4 is released, the current stable branch will be rel_2_4 and HEAD will be for 2.6 (I guess) |
13:05 | shaun | (so our templates will make it into 2.4, if i commit them to head... nice) |
13:05 | owen | That's right. |
13:19 | shaun | I have a worry: if our templates are not the default in 2.2, and will not be in Koha at all until 2.4, yet we are basing our templates on the 2.2.2 release, we will have to backport all of the changes in the default templates in head between the 2.2.2 release and the time that 2.4 is being alpha'd - got any suggestions for getting around this? |
13:20 | owen | Keep up with updates. |
13:21 | | Every time a change is made to a default template in CVS, make a similar change to yours. |
13:21 | shaun | with the cvs mailing list, i suppose - what if the changes are incompatible with 2.2...? |
13:22 | owen | I'm not sure what you mean...If you want your templates to work in 2.2 and 2.4, you'll have to maintain two different versions. |
13:25 | shaun | not so much 2.2 and 2.4 - more 2.2 and head, as our templates came from 2.2, but updates to the non-template stuff (the perl backend) in head could be incompatible with the (stable and released) version of koha we are working on. |
13:26 | owen | Still... you're talking about maintaining two different versions. |
13:26 | shaun | is there much of a change in database structure between 2.2 and the current head? |
13:27 | owen | paul would know if you manage to catch him around here. |
13:28 | kados | shaun: do a diff on updatedatabase between 2.2 and 2.4 to find out |
13:29 | shaun | well - in the same way that there were major changes between 1.x and 2.0, would it be probable that there are any changes between 2.2 and 2.4? (i presumed you would know, as you are working at a library, and therefore have to manage the upgrades to some extent) |
13:29 | | who is updatedatabase? :) |
13:30 | | is it a perl script? |
13:30 | owen | /updater/updatedatabase |
13:30 | | If there weren't any changes between 2.2 and 2.4, there wouldn't be a 2.4. We'd just keep calling it 2.2! :) |
13:31 | shaun | i meant changes to the database specifically |
13:35 | | ah, there are a couple of changes, but nothing that really affects me... |
13:37 | | so, if i work purely on head, how can i ensure that my installed copy is the latest, without rolling, untarring, installing and installing the templates and database again? |
13:39 | owen | Get a fresh copy from CVS? |
13:39 | kados | shaun have you read the documentation on kohadocs.org? |
13:40 | | http://www.kohadocs.org/Updating_Koha.html |
13:40 | | specifically that one? |
13:42 | shaun | yes *promptly writes bash script which achieves effect of said document* |
13:51 | | kados, do you have any idea when 2.4 will be out? i really need to demonstrate the system to the library/librarians which will be using it, but if 2.4 is a long way away, i will consider making a custom release of 2.2 with the new templates backported... unless our templates go straight into cvs, which, frankly, they are not ready for yet... |
13:52 | | stability is important, obviously... |
13:52 | | (brb) |
13:52 | kados | not sure |
13:52 | owen | I think there hasn't really been much done in HEAD since 2.2 came out, so 2.4 isn't looking very close |
13:56 | | And don't forget, shaun, HEAD is for unstable stuff, so you can commit your templates any time, and keep updating as you go. It's up to you, though. |
14:30 | shaun | *doesn't know what to do...* |
14:33 | owen | Look, if you want your templates to work in 2.2 *and* you want your templates to be part of the 2.4 release, you're going to *have* to maintain two different copies. Or upgrade your templates all in one go when 2.4 is released. |
14:34 | | It's no fun, but no one said the job was easy! :) |
14:47 | shaun | well, if I knew that 2.4 will be released before september, and has the amazing features we were talking about last week (from the argentinians - still don't know the full story...) then I would just commit to head, and make sure that all further template development (right down to the bugfixes) is done on our new ones |
14:49 | owen | We can't predict the future. |
14:49 | shaun | *wonders what else owen might be predicting, aside from the future :D* |
14:58 | owen | You're right, that was so redundant. I should have just said 'We can't the future.' |
15:00 | shaun | rofl, I can't the future either |
15:09 | kados | well ... unless you're submitting bugfixes you should be committing to HEAD anyway |
15:09 | | rel_2_2 is just for bugfixes for 2.2 |
15:10 | owen | (and *very* minor additions) |
15:14 | kados | check this catalog out: |
15:14 | | http://search.lexpublib.org/ |
15:14 | | very fancy |
15:14 | owen | Hmmm... I can't seem to connect |
15:15 | shaun | owen: it's a bit slow for me too, it loaded eventually... |
15:17 | | don't see what's fancy about it... |
15:18 | kados | I think the 'suggestions' feature is pretty neat |
15:19 | | boy ... it sure is slow! |
15:19 | shaun | yes, but it's at the expense of general usability, and speed... |
15:19 | kados | well ... it wasn't this slow last time I saw it |
15:19 | shaun | i take it they're not running koha on mysql 4 then ;) |
15:19 | kados | hehe |
15:19 | shaun | (burn 'em!) |
15:19 | kados | well actually ... |
15:20 | | I've been doing a bit of reading on search methods |
15:21 | | I used to wonder what other databases there are besides Relational |
15:22 | | I'm starting to understand at least one other: textual |
15:22 | owen | I like how for related terms to 'microtechnology' they offer 'Macbeth--Chronology' |
15:22 | kados | so hehe |
15:23 | | so in the case of searching MARC records, rather than using a RDBMS we should be using a textual DBMS |
15:23 | | using textual indices |
15:23 | | something like Lucene |
15:23 | owen | Is that more like what Google does? |
15:23 | kados | yep |
15:24 | | and every other sane approach to searching texts does something similar |
15:24 | | in effect we have two different databases |
15:24 | | one is a database 'about' books |
15:24 | | the MARC records |
15:24 | shaun | meh... i know a little about implementing lucene in java - what apache forrest has taught me ;) - does anybody else have any idea how to implement it (somehow) in koha? |
15:24 | owen | Interesting...you'd have to program your indexer to give different weight to different areas of the MARC record depending on the search type. |
15:24 | kados | and one is a database 'of' books ... our holdings, status, etc. |
15:25 | | I think PLucene is the answer as it's in perl ;-) |
15:25 | shaun | well, yes... i'm not a perl genius (*yet*) |
15:26 | kados | RDBMS are not really designed to handle a textual database and so it's performance and accuracy are actually quite poor |
15:26 | | owen: right |
15:27 | | you could actually do some really neat stuff with the indexer |
15:27 | | http://search.cpan.org/~tmtm/Plucene-1.21/ |
15:28 | | there's that clumsy 'it's' again ;-) |
15:28 | owen | That Lexington catalog may be snazzy, but it's ugly as sin! |
15:28 | kados | ehe |
15:28 | | yea |
15:30 | | http://www.perl.com/pub/a/2004/02/19/plucene.html |
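As a rough illustration of the kind of search code being discussed, here is a sketch of a Plucene query that returns Koha bibids, following the API described in that perl.com article. The index path and the `title`/`bibid` field names are assumptions about how such an index could be laid out, not existing Koha code.

```perl
#!/usr/bin/perl
# Hypothetical search sketch: assumes an index at /var/lib/koha/plucene
# with a tokenised 'title' field and a stored 'bibid' keyword field.
use strict;
use warnings;
use Plucene::Analysis::SimpleAnalyzer;
use Plucene::QueryParser;
use Plucene::Search::IndexSearcher;
use Plucene::Search::HitCollector;

my $searcher = Plucene::Search::IndexSearcher->new('/var/lib/koha/plucene');
my $parser   = Plucene::QueryParser->new({
    analyzer => Plucene::Analysis::SimpleAnalyzer->new,
    default  => 'title',            # field searched when none is given
});
my $query = $parser->parse($ARGV[0] || 'microtechnology');

# Collect the bibids of matching documents; item-level data (status,
# holdings, etc.) would still be fetched from MySQL afterwards.
my @bibids;
my $collector = Plucene::Search::HitCollector->new(
    collect => sub {
        my ($self, $doc_id, $score) = @_;
        push @bibids, $searcher->doc($doc_id)->get('bibid')->string;
    },
);
$searcher->search_hc($query, $collector);

print "matching bibids: @bibids\n";
```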
15:31 | shaun | are there any commercial/proprietary ILS which do this out of the box? |
15:31 | kados | I'm sure most of the big ones do |
15:31 | | which would explain why their searches are so damn fast even with HUGE datasets |
15:32 | | and even when those datasets include fulltext records |
15:33 | | (which is something else we could look into providing for Koha) |
15:33 | | (if we went with PLucene) |
15:35 | shaun | surely that'd be for a 3.0 release? |
15:36 | owen | Right, so...next Wednesday? Let's get on it, people! |
15:36 | shaun | ??? what happens next wednesday, may i ask? |
15:37 | kados | could be ... redesigning the search may not be very hard though |
15:37 | owen | Koha 3.0!!! We've got a lot of coding to do!!!11 |
15:37 | kados | from the research I've done so far Plucene is quite easy to use |
15:37 | | hehe |
15:38 | | to do a 3.0 we'd have to do quite a bit of bugfixing too |
15:38 | shaun | what bugfixing is this? |
15:39 | | (i mean, what blockers are there?) |
15:39 | owen | For 3.0? How about Bug 1235 -- "Plucene functionality still imaginary"? |
15:41 | | ;) |
15:45 | shaun | http://bugs.koha.org/cgi-bin/b[…]loc_type=allwords |
15:45 | | substr&field0-0-0=noop&type0-0-0=noop&value0-0-0=&cmdtype=doit&order=%27Importance%27 |
15:45 | | long link ;) -- bugzilla needs a little bit of cleaning |
15:46 | kados | owen: hehe |
15:47 | shaun | brb |
15:48 | kados | so for using plucene I think the thing to do is |
15:48 | | export the marc records as xml |
15:48 | | using a free tool (we'll need to find one) |
15:50 | | then extract the meta tag for insertion into the indexer |
15:50 | | tags that is |
15:51 | | then modify our search code to grab bibids from plucene's indexes before passing it off to find out item-specific info, status, etc. |
15:52 | | chris around yet? |
15:55 | | maybe it would be better to use MARC::Record to extract data directly and insert it into the index via the indexer... |
15:56 | shaun | "export the marc records as xml" -- when? with an index daemon/cronjob or at the time of searching, directly? |
15:56 | kados | see my revision ^ ;-) |
15:56 | | this would happen as records were added/deleted |
15:57 | | so ... not very often |
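Roughly what that add/delete-time indexing step could look like with MARC::Record and Plucene. Purely a sketch under the same assumptions as the search example above; `get_marc_record()` is a stand-in for however Koha actually hands back a MARC::Record.

```perl
#!/usr/bin/perl
# Hypothetical indexing sketch; index path and field names match the
# search example above, and get_marc_record() is only a stub.
use strict;
use warnings;
use MARC::Record;
use MARC::Field;
use Plucene::Analysis::SimpleAnalyzer;
use Plucene::Index::Writer;
use Plucene::Document;
use Plucene::Document::Field;

# Third argument 1 = create the index from scratch; pass 0 instead to
# append when a single record is added or changed.
my $writer = Plucene::Index::Writer->new(
    '/var/lib/koha/plucene',
    Plucene::Analysis::SimpleAnalyzer->new,
    1,
);

for my $bibid (@ARGV) {
    my $record = get_marc_record($bibid);

    my $doc = Plucene::Document->new;
    # Keyword fields are stored but not tokenised: ideal for getting the
    # bibid back out at search time.
    $doc->add( Plucene::Document::Field->Keyword( bibid => $bibid ) );
    # Text fields are tokenised and searchable; 245$a and 100$a are the
    # usual MARC21 title and author tags. This is also the place where
    # owen's idea of weighting some MARC fields more heavily would go.
    $doc->add( Plucene::Document::Field->Text( title  => $record->subfield('245', 'a') || '' ) );
    $doc->add( Plucene::Document::Field->Text( author => $record->subfield('100', 'a') || '' ) );

    $writer->add_document($doc);
}
undef $writer;    # closing the writer flushes the index to disk

# Stub only: real code would pull the record from Koha's MARC storage.
sub get_marc_record {
    my ($bibid) = @_;
    my $r = MARC::Record->new;
    $r->append_fields(
        MARC::Field->new( '245', '1', '0', a => "Sample title for bib $bibid" ),
        MARC::Field->new( '100', '1', '',  a => 'Sample, Author' ),
    );
    return $r;
}
```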
16:02 | shaun | wouldn't this be quite slow (particularly on pre-mysql 4.1 - *brings back performance topic*)? |
16:02 | kados | wouldn't what be slow? |
16:03 | shaun | searching, indexing or adding, particularly following a large marc import |
16:03 | kados | btw: plucene also supports variations and inflections of a word |
16:04 | | well ... searching will be really fast since we won't be using mysql |
16:04 | | indexing and adding can be backgrounded ... it might take a while |
16:04 | | but it doesn't happen nearly as much as searching eh? |
16:05 | shaun | why not mysql? will it be using xml entirely? |
16:06 | kados | no ... disregard my xml comment |
16:06 | shaun | (*very confused*) |
16:08 | | please explain more, as i don't understand how we could not be using mysql, yet still searching the database |
16:12 | | ? |
16:21 | kados | http://openstacks.net/os/index.xml |
16:21 | | podcast on LibLime ;-) |
19:11 | rach | russ? |
02:57 | hdl | hi |
03:40 | Sylvain | hi |
04:32 | | ah, we may have a candidate in the next 2-3 weeks ... |
04:32 | | it seems interesting :) |
10:25 | owen | paul... didn't there used to be a 'delete' button on the addbiblio screen? |
10:30 | | hi shaun |
10:31 | shaun | hi |
10:32 | | still haven't got any book data :( (and ben seems to have gone off in a huff...) |
11:03 | owen | Someone should build a Mozilla toolbar for Koha. You could put the whole navigation menu in and free up space onscreen. |
11:05 | hdl | what a killer feature. |
11:05 | owen | Anyone here know XUL? :) |
11:05 | hdl | No, but I have some docs on it. |
11:05 | owen | For that matter we could probably build a whole Koha interface in XUL |
11:06 | shaun | i know a little (jack of all trades ;)) - i'll look into it |
11:06 | hdl | But then Koha wouldn't be that easily installable on multiple machines ;) |
11:07 | owen | True, but it'd still be cool B) |
11:07 | paul | hdl : could be a good idea for intranet opacs |
11:07 | shaun | do you know a way of getting two installs on one box? the two installs seem to fight over /etc/koha.conf ;) |
11:07 | paul | yep shaun, very easy. I have at least 10 ;-) |
11:07 | | just add: |
11:08 | | SetEnv KOHA_CONF /etc/ANOTHER_koha.conf |
11:08 | | into your virtual host |
11:08 | | (in both opac & librarian interface) |
11:09 | | note that the installer just checks for /etc/koha.conf. |
11:09 | | so if you mv /etc/koha.conf /etc/koha2.conf |
11:09 | | you can install it again ! |
11:09 | hdl | and also SetEnv PERL5LIB /usr/local/koha/modules |
11:09 | paul | yep, hdl is right. |
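Putting paul's and hdl's two SetEnv lines together, the extra install's virtual hosts would look roughly like this (a sketch only: the ServerName and paths are examples, not values taken from a real koha-httpd.conf):

```apache
# Hypothetical excerpt for the second install; repeat the two SetEnv
# lines in the OPAC vhost *and* the librarian-interface vhost.
<VirtualHost *:80>
    ServerName opac2.example.org
    DocumentRoot /usr/local/koha2/opac/htdocs
    SetEnv KOHA_CONF /etc/ANOTHER_koha.conf
    SetEnv PERL5LIB  /usr/local/koha/modules
</VirtualHost>
```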
11:09 | shaun | the opac is what i am thinking - for my implementation (in a school), it would be useful to have a restricted, fullscreened firefox, with limited navigation - i can imagine it being good for self checkout |
11:09 | | k, tries that, thx |
11:10 | paul | shaun : the self checkout already exists in Nelsonville |
11:10 | | should be committed soon (isn't it, owen?) |
11:10 | owen | Yes, self-checkout is in use at one of our branches. I think Joshua just needs to find time to clean it up and commit it. |
11:12 | hdl | paul: about the medical checkup? |
11:12 | shaun | i know it exists, what we would like to see in the school library is the restricted terminal (stops users from going to another site etc) |
11:12 | paul | (didn't call, too bad for them...) |
11:12 | hdl | I'm coming or not. |
11:13 | | s/./?/ |
11:13 | paul | I'm thinking about it... |
11:13 | | have you made good progress on the dictionary? |
11:14 | hdl | I can more or less see the processing for the search. But I'm not quite sure what to do with the results. A list of checkboxes to add them to the search? or something else...? |
11:14 | paul | ok, come then, that way we'll do it together. |
11:14 | | the appointment is at 11:00 |
11:15 | | so you can come during the morning |
11:15 | hdl | so not necessarily at the crack of dawn ;) |
11:15 | paul | no need to get up at 5:00, though ;-) |
11:15 | hdl | ok. |
11:15 | | what time do the traffic jams end? |
11:16 | | or do they never end??? |
11:16 | paul | by around 9:30 you should be in the clear |
11:16 | hdl | ok. |
11:16 | shaun | DBD::mysql::st execute failed: Invalid default value for 'aqbudgetid' at scripts/updater/updatedatabase line 1061. DBD::mysql::st execute failed: Invalid default value for 'id' at scripts/updater/updatedatabase line 1061. |
11:16 | | -- in installer.pl, when installing on mysql 4.1 - anybody else noticed this? |
11:17 | paul | iirc shaun, you should also have a warning just before or after the update, saying "don't worry, it's not really a problem" |
11:17 | owen | Yes, paul, http://www.bigballofwax.co.nz is chris's |
11:19 | shaun | paul: there is no such message... |
11:20 | paul | mmm... anyway, i'm almost sure it's a problem you can ignore. |
11:32 | Sylvain | oh dear, I just replied about the issuing rules and in the meantime two more replies arrived; the poor woman is going to be swamped with answers :) |
11:33 | | well, all 3 of us answered more or less the same thing, so that's fine :) |
11:33 | paul | ;-) |
11:33 | Sylvain | but I think the problem she raises deserves some thought |
11:33 | | I also struggled with it quite a bit |
11:34 | paul | personally, I don't consider that a bug. |
11:34 | | (or else there's something I'm missing) |
11:34 | | if you put 0,0 in a *, it means "no borrowing allowed" |
11:34 | Sylvain | yes |
11:35 | | if you put nothing, it means you don't want to fill in that cell |
11:35 | | but if it's in a * column/row |
11:35 | | that doesn't necessarily mean 0,0 for the whole row/column |
11:35 | | (well, it's been a while since I ran into the problem, I'm no longer sure exactly what it was) |
11:35 | paul | so your idea would be to automatically put the maximum on the * entries when nothing is filled in? |
11:36 | | (for example) |
11:36 | Sylvain | well, I'm afraid of talking nonsense, all this was a while back |
11:36 | | hang on |
11:36 | paul | bad idea on reflection: if we computed it automatically we'd get "max number = the largest of the values", whereas for many libraries it's the sum of the values |
11:37 | | 5 books, 3 CDs and 8 of anything |
11:37 | hdl_away | actually, sorry to butt in on my way out, but maybe the * is meant as the definition of the rights for everyone else, excluding those that are already defined. |
11:37 | paul | on the other hand, for the * per borrower type it's rather the largest value, indeed. |
11:39 | hdl | sorry, I said something silly. |
11:40 | Sylvain | ah, I've just redone my little tests |
11:40 | | actually, when a cell is left empty, that's indeed it: it creates a record with the number of issues set to NULL |
11:41 | | so now, why is that a problem again? I can't remember |
11:45 | | well, I really don't remember any more :( |
11:45 | | but still, I think I heard Pascale say something on the subject, maybe she'll respond on the mailing list |
11:45 | | in any case there is definitely something there :) |
11:45 | | on that note, it's time to head home in the Paris rain ... |
11:45 | | see you tomorrow |
11:45 | paul | we'll wait and see whether Pascale and Carole respond |
11:45 | | see you tomorrow |
11:46 | | (what are you working on at the moment, by the way) |
11:46 | | (well, we'll see tomorrow) |
11:46 | | ;-) |
11:46 | sylvainOu | I've got 2 seconds ;) |
11:46 | paul | hdl has started on the "dictionary" |
11:46 | sylvainOu | actually I'd gone off onto other things, and I'm trying to finish off ordering from the reservoir |
11:46 | paul | really tricky to handle both libraries with authority lists and those without. |
11:47 | | (but we've worked out how to do it, it's going to be really neat) |
11:47 | sylvainOu | but with all these biblio, biblioitems, OLD, new MARC tables .. |
11:47 | | well, good for you |
11:47 | paul | actually, we don't search on biblio/biblioitems at all... |
11:47 | sylvainOu | that's going to be interesting! |
11:47 | paul | we only use the MARC fields |
11:47 | | with the Search.pm API, you can find or look up things. |
11:48 | | (performance still needs to be validated, but I'm not too worried) |
11:48 | sylvainOu | ok |
11:48 | | on that note, I really do have to go |
11:48 | | ++ |