All times are shown in UTC.
| Time | Nick | Message |
|---|---|---|
| 14:34 | gmcharlt | hdl: logbot for #koha is now back |
| 14:36 | hdl | thx gmcharlt |
| 14:55 | atz | shoot that horse. |
| 14:56 | fbcit | hrmm... out of disk space... apache has a 22M error.log |
| 15:01 | atz | out of HDD? |
| 15:02 | | df -h |
| 15:02 | fbcit | atz: seems to me 4.0G should be enough for a stripped-down install of Debian? |
| 15:03 | | it's definitely out of HDD |
| 15:03 | | I just can't figure out what's hogging it all *(see the disk-usage sketch below the log)* |
| 15:05 | gmcharlt | I claim DB rev # 062 |
| 15:05 | fbcit | hehe |
| 15:05 | atz | check /home/zvpaodfiqef/warez/DVD_rips/* |
| 15:06 | | what partition is full? |
| 15:06 | | or do you only have 1? |
| 15:07 | fbcit | / is full |
| 15:07 | | I have 5 partitions |
| 15:08 | | biblios:/# du -sh proc |
| 15:08 | | 898M proc |
| 15:08 | | biblios:/# du -sh usr |
| 15:08 | | 1.1G usr |
| 15:08 | | those claim the most usage |
| 15:09 | atz | proc doesn't make any sense |
| 15:09 | | that's not even HDD |
| 15:11 | fbcit | 1016K usr/share/groff |
| 15:11 | atz | on our dev server, /proc is 897M, so that seems about right |
| 15:11 | fbcit | what is 'groff'??? |
| 15:11 | atz | it's a pager, iirc, like nroff |
| 15:12 | | a viewer for docs |
| 15:14 | fbcit | atz: what does your /usr usage look like? |
| 15:14 | | 'hates' even |
| 15:15 | atz | are you running with DEBUG on or something? |
| 15:15 | | our /usr is 1.3GB |
| 15:15 | fbcit | not that I know of |
| 15:15 | | so that looks normal as well |
| 15:17 | atz | what's your biggest file(s) in /var/log/ ? |
| 15:18 | fbcit | 56K var/log/exim4 |
| 15:18 | | 44K var/log/messages.0 |
| 15:18 | | 36K var/log/auth.log.0 |
| 15:18 | | 32K var/log/kern.log.0 |
| 15:18 | | 20K var/log/dmesg.0 |
| 15:18 | | 20K var/log/dmesg |
| 15:19 | | oops |
| 15:19 | | sorry... sort order was backwards |
| 15:19 | | 1.2G var/log/messages |
| 15:19 | | 1.2G var/log/syslog |
| 15:20 | atz | that's pretty huge |
| 15:20 | fbcit | 8-O |
| 15:21 | atz | so logrotate can't happen b/c you can't add the gzip file to the same partition |
| 15:22 | | so go ahead and gzip the logs to a different partition, then remove the originals, |
| 15:22 | | then you can copy back the gz files |
| 15:23 | | log data is highly compressible |
| 15:23 | | (low entropy, lots of repetition) *(see the gzip sketch below the log)* |
| 15:25 | fbcit | atz: got it, and now: /dev/sda1 4.0G 1.6G 2.2G 42% / |
| 15:27 | | tnx |
| 15:27 | atz | np |
| 15:32 | fbcit | atz: I can't figure out why logrotate did not rotate those log files |
| 15:33 | atz | are your cronjobs running OK? |
| 15:33 | | cron is picky, and easy to mess up |
| 15:51 | fbcit | cron seems fine |
| 15:52 | | guess I'll have to keep an eye on it for a few days |
| 17:22 | | what flash elements are used on addbiblio.pl? |
| 17:35 | atz | flash? |
| 17:35 | | you mean in biblios? |
| 17:35 | fbcit | gmcharlt: do multi-volume works have MARC records for each vol or one record for all vols? |
| 17:36 | | atz: yep |
| 17:36 | gmcharlt | fbcit: depends |
| 17:36 | fbcit | flash loads up when I go to add a new biblio |
| 17:36 | gmcharlt | for something like an encyclopedia, where the volumes don't have a separate title, one bib |
| 17:37 | | for a multi-volume set, such as a series of scientific monographs where each volume has its own title, often each will get its own bib |
| 17:38 | fbcit | gmcharlt: if all vols have the same LCCN I assume there would only be a single bib? |
| 17:38 | gmcharlt | yes |
| 17:38 | fbcit | tnx |
| 17:39 | gmcharlt | LCCN is unique per bib - if multiple bibs originally catalogued by LC share the same LCCN, that means that LC really, really screwed up |
| 17:41 | fbcit | atz: so are there flash elements in biblios? |
| 17:41 | atz | you'll have to ask ccatalfo, I don't have biblios on mine yet |
| 17:42 | | I know he uses Google Gears, which is my best guess for the flash uploader part |
| 17:42 | fbcit | oops |
| 17:42 | | not that |
| 17:42 | atz | YUI might be involved too |
| 17:42 | fbcit | just addbiblio.pl as it currently exists |
| 17:42 | atz | no idea there |
| 17:56 | fbcit | like, the source of acquisition truncates some data, and the copy number disappears, for starters |
| 18:00 | atz | yeah, the data does NOT make a round trip through editing w/o perturbation |
| 18:01 | | encoding is still a problem |
| 18:03 | | gmcharlt: regarding tagging... so the tags refer to wherever the catalog data lives |
| 18:03 | | how many places is that? biblios is the obvious one |
| 18:03 | | but possibly biblioitems and even items also? |
| 18:05 | gmcharlt | atz: I'd definitely ignore biblioitems |
| 18:05 | | atz: I suppose item-level tagging could be supported ("this is signed by Neil Gaiman himself!") |
| 18:05 | | but I think biblio-level *only* is sufficient |
| 18:06 | atz | glad to hear it |
| 18:06 | fbcit | atz: so basically editing an item record will mess it up with the current state of things? |
| 18:07 | atz | I've seen some bugs reported... I don't know the current status |
| 18:09 | gmcharlt | fbcit: write them up please - item editing wasn't supposed to be this unstable by this point |
| 18:18 | fbcit | gmcharlt: the problem appears to be somewhere in the code that retrieves the item record and loads it into the form for editing... |
| 18:18 | | I've opened a bug: 1927 |
| 18:19 | gmcharlt | fbcit: thanks |
| 18:20 | fbcit | I'll have a look for a minute to see if anything stands out |
| 18:21 | | hrmm, additem.pl: DBD::mysql::st execute failed: Unknown column 'copynumber' in 'field list' at /usr/share/koha/lib/C4/Items.pm line 1752. |
| 18:28 | | items.copynumber appears to be missing in any form |
| 18:34 | | for starters, items.booksellerid is a varchar(10), which explains the truncation |
| 18:34 | | items.copynumber does not exist, which explains the "dropped" copy number |
| 18:35 | | which is really not dropped, it's just never inserted to start with |
| 18:36 | | gmcharlt: any reason items.booksellerid should not be a varchar(255)? What if I'd like to enter a URI as the source of an item? |
| 18:40 | gmcharlt | fbcit: upon two minutes' examination, it looks complicated |
| 18:40 | | because items.booksellerid, if it were the right type, might have been intended as an FK of aqbookseller |
| 18:44 | | but if the FK relationship is not intended, then varchar(255) or mediumtext would be OK |
| 18:44 | | ideally, you'd want both |
| 18:44 | | i.e., a key to the acq vendor record, if the material was purchased via Koha's acq system |
| 18:45 | | and a freetext source-of-acquisitions field |
| 18:45 | fbcit | I agree with the FK thought |
| 18:46 | | but that is definitely broken at this point in any case |
| 18:46 | | I wonder if switching to a varchar(255)/mediumtext represents an acceptable transition to a total fix of both issues? *(see the updatedatabase sketch below the log)* |
| 18:47 | gmcharlt | it might |
| 18:47 | fbcit | if so, I'll submit a patch to address the issues I noticed and file a bug on the other |
| 18:47 | gmcharlt | depends on how acqui populates items.booksellerid, and whether any existing code expects an implicit FK relationship |
| 18:48 | fbcit | there is no existing FK between the items table and the acqui tables on booksellerid |
| 18:48 | gmcharlt | not an explicit one, no |
| 18:48 | fbcit | ahhh... I forgot about software-enforced relations |
| 18:48 | gmcharlt | I'm worried whether there's an implicit one that some code is trying to use or enforce |
| 18:49 | fbcit | as an addendum to your devel post: I think all relationships should be db-enforced if possible |
| 18:49 | gmcharlt | although since aqbookseller.aqbooksellerid is an int(11) and items.booksellerid is varchar(10), most likely not (or if there is code, it is obviously broken :) ) |
| 18:50 | | fbcit++ |
| 18:50 | fbcit | exactly |
| 18:51 | gmcharlt | yep |
| 18:51 | fbcit | gmcharlt: so I'll submit a patch to fix my bug? |
| 18:51 | gmcharlt | patch for 1927, you mean? |
| 18:51 | fbcit | right |
| 18:52 | gmcharlt | sure; please CC me on the patch |
| 18:52 | fbcit | I think the acqui issue is more of a feature req |
| 18:53 | gmcharlt | yeah, expanding the size of items.booksellerid would fall more into an enh req |
| 18:54 | fbcit | also, it appears that the initial display of the item after adding it is based on form data rather than an actual query of the newly inserted record |
| 18:55 | | it seems to me that the display should reflect an actual query *(see the re-fetch sketch below the log)* |
| 18:56 | | for that reason I missed this issue when adding only one item of a particular bib |
| 19:01 | | gmcharlt: are you still holding claim to DB rev 062? |
| 19:02 | gmcharlt | yes. it's mine! mine! I tell you |
| 19:02 | | :) |
| 19:02 | | if you take 063, while that will still produce a technical conflict for the RM to deal with, the merge will be easy to resolve |
| 19:03 | fbcit | and rubs his hands together greedily... :-) |
| 19:03 | gmcharlt | and actually, I have a better idea - send your patch to me directly |
| 19:03 | | I'll sign off, deal with the DBVer conflict, and send the whole package to patches@ by tomorrow late morning |
| 19:03 | fbcit | that will work |
| 19:04 | | I'll not change kohaversion.pl - I'll leave it to you |
| 19:04 | gmcharlt | ok |
| 19:17 | fbcit | gmcharlt: you should have them |
| 19:25 | | gotta run |
| 19:25 | fbcit-away | bbl |
| 19:27 | chris | morning |
| 19:31 | nengard | chris: he's installing upgrades to my koha install :) |
| 19:32 | chris | ahh cool |
| 19:34 | nengard | chris - very cool! I'm getting biblios and some patches :) |
| 19:34 | | well it's quitting time for me - so I'm off to clean the house :) |
| 19:34 | | ttyl |
| 19:40 | | I'm back |
| 19:40 | | got a favor to ask |
| 19:40 | chris | yep? |
| 19:40 | nengard | Hi all - sorry for this blatantly off topic post - but the voting ends today (March 11) and I want to donate the prize money to Sheltie Rescue - so it's a good cause :) |
| 19:40 | | http://www.bissell.com/redirec[…]_id=47118&Pet=767 - Beau |
| 19:40 | | http://www.bissell.com/redirec[…]_id=47118&Pet=762 - Coda |
| 19:40 | | Send this to your friends and family :) We only win a vacuum, but if we win the entire contest we get $5000 to give to the animal charity of our choice!!! |
| 19:40 | | Thank you!! |
| 19:40 | | chris - not just of you - of everyone :) |
| 19:40 | chris | already voted |
| 19:40 | nengard | I know!!! :) THANKS :) |
| 19:40 | chris | hehe |
| 19:41 | nengard | but I want more votes and I have a huge community to tap into :) |
| 21:20 | fbcit | hi chris |
| 21:20 | hdl | gmcharlt, atz: is there someone here who deals with cataloguing? |
| 21:21 | gmcharlt | hdl: what's your question? |
| 21:23 | hdl | gmcharlt: I would like to know if we consider normalizing UTF-8 before storing elements. |
| 21:23 | | (it could be important for diacritics: |
| 21:23 | gmcharlt | hdl: what do you mean, specifically? everything should be in UTF-8 when it is stored in the Koha database. |
| 21:23 | hdl | é è î can be encoded in two different ways. |
| 21:24 | gmcharlt | hdl: are you referring to Unicode normalization forms, e.g., NFC, NFKD, etc.? |
| 21:24 | hdl | And if XML records are not normalized, it can end up being a mess to find la bête. |
| 21:24 | | gmcharlt: yes. |
| 21:26 | gmcharlt | hdl: it would be a good idea for us to take some control over it |
| 21:26 | | e.g., export NFKD for MARC records; use NFC for output to web browsers, etc. |
| 21:27 | | but given history, I think that any code in Koha that relies on (or would be made more convenient by) a specific normalization form should do the normalization explicitly, using the appropriate Unicode::Normalize routine |
| 21:27 | | and not rely on any specific NF being used in the database storage |
| 21:28 | hdl | I agree. But this is also a pain for searches: |
| 21:29 | | (hi js) |
| 21:29 | | since if you query é then you must be able to search for all forms of é in Zebra. |
| 21:30 | | it can be handled via character mappings. |
| 21:30 | gmcharlt | hdl: that's going to require a two-pronged approach, possibly |
| 21:30 | hdl | and maybe this is also a direction. |
| 21:30 | js | (hi all) |
| 21:31 | gmcharlt | ideally, it would be nice to get Zebra to be insensitive to a specific NF, since Zebra can do NF changes faster than Perl (or Perl XS) code can |
| 21:31 | | if Zebra cannot be thus configured |
| 21:31 | | then I guess we'll need to settle on a specific NF to use for MARCXML records when they are sent to Zebra |
| 21:32 | | and then make sure that query strings are put in the same NF before being submitted to Zebra *(see the normalization sketch below the log)* |
| 21:33 | hdl | this makes sense. |
| 21:33 | gmcharlt | hdl: why don't you file a bug for this - it will take a bit to research the Zebra options |
| 21:33 | hdl | And Zebra can be configured so that it is insensitive to NF. |
| 21:34 | | since you can define character mappings. |
| 21:35 | | ... unless the mapping is only character-to-character. |
| 21:35 | | but I think that is not the case. |
| 21:38 | | but this would be Zebra-specific, and every search engine would need its own special configuration.... |
| 21:38 | | anyway, let us think about Zebra now, and about the others later. |
| 21:52 | fbcit | gmcharlt: any idea what items.enumchron is? |
| 22:00 | | gmcharlt: you should have another patch adding another missing column to deleteditems |
| 22:02 | hdl | gmcharlt: I am looking at the way items are stored. |
| 22:02 | gmcharlt | fbcit: items.enumchron = volume statement, e.g., "v.10 (2004)" |
| 22:02 | hdl | And I see that it uses XML records for storing unlinked fields. |
| 22:02 | fbcit | gmcharlt: I just added it to my db ver in updatedatabase.pl |
| 22:03 | gmcharlt | hdl: correct |
| 22:04 | hdl | But it does not use C4::Context->preference('marcflavour') to generate this XML. |
| 22:04 | gmcharlt | fbcit: in your deleteditems patch, the add of enumchron must be *before* copynum to preserve column order |
| 22:04 | hdl | So it does not need field 100$a. |
| 22:05 | | But to decode this XML piece, 100$a is required. |
| 22:05 | gmcharlt | hdl: this is intentional - the XML snippets used in that column are (a) always UTF-8 and (b) always integrated into biblioitems.marcxml for indexing |
| 22:06 | hdl | gmcharlt: I would agree, if it did not break when decoding that XML for UNIMARC for want of 100$a. |
| 22:07 | gmcharlt | hdl: if you have a bug, please report it |
| 22:08 | hdl | gmcharlt: I first want to analyse your process and see what can be done to make it work for us. |
| 22:08 | | Is this wrong? |
| 22:10 | gmcharlt | hdl: again, if you have a bug, please provide a test case and report it - I will be happy to provide more explanation of what I was up to, but your providing concrete information of what is breaking would really help |
| 22:11 | hdl | at line 2020 you write: my $marc = MARC::Record->new_from_xml(StripNonXmlChars($xml), 'UTF-8', C4::Context->preference("marcflavour")); |
| 22:12 | | so you are using 100$a (for UNIMARC, to decode the XML) to provide information for editing items. |
| 22:12 | | but when it comes to saving: |
| 22:14 | | marcflavour is not used. |
| 22:16 | fbcit | gmcharlt: there are the patches to correct column order :) |
| 22:16 | hdl | So that items information are saved without 100$a but when decoding, it is required. |
| 22:17 | gmcharlt | hdl: probably marcflavour in _parse_unlinked_item_subfields_from_xml is not needed, or could be a constant MARC21 *(see the MARC21-flavour sketch below the log)* |
| 22:18 | | but this will need to be tested under both the MARC21 and UNIMARC options |
| 22:18 | | so I still think it would be in our best interests if a bug were filed :) |
| 22:19 | hdl | gmcharlt: I will. But I do not like to just file bugs. I also want to be able to propose solutions, and even propose patches. |
| 22:28 | owen | hdl around? I know it's late... |
| 22:28 | hdl | still around owen. |
| 22:29 | owen | Hi hdl, I'm just now getting a chance to try your suggestions patch |
| 22:29 | | I'm not sure I'm doing it right... git-apply <path to patch>? Is there more to it than that? |
| 22:29 | atz | if it applies OK, that's all there is |
| 22:30 | owen | And if it doesn't? :) |
| 22:30 | atz | then it will give an error message as to why |
| 22:31 | | usually only held up by stuff like missing directories or files |
| 22:31 | | permissions, etc. |
| 22:32 | owen | I see stuff about trailing whitespace, but that seems typical |
| 22:32 | atz | yeah, there is an option to turn off those warnings |
| 22:32 | | for other code sets it might matter more |
| 22:32 | owen | But no other error messages |
| 22:33 | hdl | owen: has the patch applied? |
| 22:34 | owen | 0001-suggestion-management-Improvements.patch:157: error: patch failed: koha-tmpl/intranet-tmpl/prog/en/modules/suggestion/acceptorreject.tmpl:46 error: koha-tmpl/intranet-tmpl/prog/en/modules/suggestion/acceptorreject.tmpl: patch does not apply |
| 22:35 | atz | that usually means the version the patch started from and your current version don't match |
| 22:35 | | if you have useless edits to that file, then you can do git checkout koha-tmpl/intranet-tmpl/prog/en/modules/suggestion/acceptorreject.tmpl |
| 22:35 | | and then try to reapply |
| 22:36 | | if you have useful edits, commit them first, then reapply |
| 22:36 | owen | No, I don't have any outstanding changes to that file |
| 22:40 | atz | have you rebased recently? |
| 22:40 | owen | I just fetched and rebased and tried again, with the same results |
| 22:40 | | How about you, hdl? |
| 22:40 | atz | hrm... |
| 22:42 | hdl | I tried to apply the patch on another git repository. |
| 22:42 | | And it failed with the same error as you. |
| 22:47 | gmcharlt | hdl: re your last to me - I understand completely - I can fall into the same trap myself |
| 22:49 | hdl | gmcharlt: sorry to bug you. But we also have to be able to describe the problem so that the solution is OK for everyone. |
| 22:49 | gmcharlt | yep |
| 22:50 | | although, back to the bug report issue: raising the issue via bugs.koha.org does make it easier for other interested parties to find the problem description and contribute |
| 22:50 | atz | it doesn't help them avoid a solution that is itself another bug :) |
| 22:53 | hdl | owen: it seems that a commit on the same template, adding some UI for tabs at line 46, is making the patch fail to apply. |
| 22:55 | fbcit | atz: heh |
| 23:04 | hdl | owen: I tried to rebase and had conflicts on that file. |
| 23:04 | | Maybe I should send you the three files so that you can test. |
| 23:18 | owen | hdl: that sounds good to me |
| 23:41 | | Got the files, thanks hdl |
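
A few Perl sketches follow, expanding on techniques from the discussion above. First, the 15:02-15:08 disk-hog hunt: a minimal sketch of the same idea as `du -s /*`, skipping pseudo-filesystems such as /proc, which (as atz points out) report sizes without using any real disk. It is approximate by design: hard links are double-counted, mount points are not excluded, and the skip list is an assumption.

```perl
#!/usr/bin/perl
# Rough per-directory disk usage for /, skipping pseudo-filesystems.
use strict;
use warnings;
use File::Find;

my %skip = map { $_ => 1 } qw(/proc /sys /dev);   # not real disk usage
my %usage;

opendir my $root, '/' or die "cannot open /: $!";
for my $entry (grep { $_ ne '.' && $_ ne '..' } readdir $root) {
    my $dir = "/$entry";
    next if !-d $dir || -l $dir || $skip{$dir};
    my $bytes = 0;
    find( sub { $bytes += -s $_ || 0 if -f $_ }, $dir );
    $usage{$dir} = $bytes;
}
closedir $root;

printf "%8.1fM  %s\n", $usage{$_} / 1024**2, $_
    for sort { $usage{$b} <=> $usage{$a} } keys %usage;
```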
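
Next, atz's 15:21-15:23 recovery recipe, using only core modules: compress the two 1.2G logs onto a partition with free space, remove the originals, then move the .gz files back. The spare path `/home/tmp-logs` is made up for the sketch.

```perl
#!/usr/bin/perl
# Sketch of "gzip the logs to a different partition, remove the originals,
# then copy back the gz files". The spare path is an example.
use strict;
use warnings;
use IO::Compress::Gzip qw(gzip $GzipError);
use File::Copy qw(move);

my $spare = '/home/tmp-logs';   # assumption: a partition with free space
for my $log ('/var/log/messages', '/var/log/syslog') {
    my ($name) = $log =~ m{([^/]+)\z};
    my $gz = "$spare/$name.gz";

    gzip $log => $gz
        or die "gzip of $log failed: $GzipError\n";

    # Only remove the original once the compressed copy exists.
    unlink $log or die "cannot remove $log: $!\n";

    # Now / has room again; bring the compressed copy back.
    move($gz, "$log.gz") or die "cannot move $gz back: $!\n";
}
# NB: syslogd still holds the unlinked inodes open, so `df` will not show
# the space as freed until syslogd is HUPped or restarted.
```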
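
For the items schema problems (18:21-18:54), a hypothetical updatedatabase.pl-style revision in the idiom that script used at the time; this is a sketch of the fix under discussion, not the actual patch submitted for bug 1927. The revision number is a placeholder (062 was already claimed above) and the varchar(32) width for copynumber is an assumption. The AFTER clauses echo gmcharlt's 22:04 point that column order in deleteditems must match items.

```perl
# Hypothetical DB revision (sketch only; not the real bug-1927 patch).
$DBversion = '3.00.00.0XX';   # placeholder rev number
if ( C4::Context->preference('Version') < TransformToNum($DBversion) ) {
    my $dbh = C4::Context->dbh;

    # Add the missing column; AFTER keeps items and deleteditems in the
    # same column order, as required for the enumchron/copynumber patches.
    $dbh->do("ALTER TABLE items ADD COLUMN copynumber VARCHAR(32) DEFAULT NULL AFTER enumchron");
    $dbh->do("ALTER TABLE deleteditems ADD COLUMN copynumber VARCHAR(32) DEFAULT NULL AFTER enumchron");

    # Widen the source-of-acquisition field so it is no longer truncated
    # at 10 characters (no explicit FK to aqbookseller exists).
    $dbh->do("ALTER TABLE items MODIFY booksellerid MEDIUMTEXT");

    print "Upgrade to $DBversion done (items.copynumber added, booksellerid widened)\n";
    SetVersion($DBversion);
}
```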
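
For the 18:54-18:55 display issue, a sketch of showing the item as the database actually stored it instead of echoing the submitted form data, so truncations and dropped columns surface immediately. `AddItemFromMarc` and `GetItem` are C4::Items routines of this vintage, but the exact signature and return values used here are assumptions.

```perl
use strict;
use warnings;
use C4::Items;   # assumption: provides AddItemFromMarc and GetItem

# Add an item from its MARC representation, then return the row as the
# database actually stored it, rather than trusting the form contents.
sub add_item_and_refetch {
    my ($record, $biblionumber) = @_;
    my (undef, undef, $itemnumber) =
        AddItemFromMarc($record, $biblionumber);   # assumed return values
    return GetItem($itemnumber);   # the display then reflects a real query
}
```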
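
For the 21:23 diacritics thread, a small Unicode::Normalize demonstration of the two encodings of é, plus the rule gmcharlt proposes at 21:31-21:32: pick one normalization form for records sent to Zebra and push query strings through the same one. Choosing NFC as the canonical form is an assumption for the sketch.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Unicode::Normalize qw(NFC NFD);

# "é" can be encoded two ways: precomposed U+00E9, or "e" plus the
# combining acute accent U+0301. Byte-wise they are different strings.
my $precomposed = "\x{00E9}";
my $decomposed  = "e\x{0301}";

print $precomposed eq $decomposed      ? "equal\n"     : "not equal\n";  # not equal
print NFC($decomposed) eq $precomposed ? "NFC equal\n" : "NFC differ\n"; # NFC equal
print NFD($precomposed) eq $decomposed ? "NFD equal\n" : "NFD differ\n"; # NFD equal

# The proposed rule: records indexed by Zebra and incoming query strings
# must go through the same normalization.
sub normalize_for_zebra {
    my ($string) = @_;
    return NFC($string);   # assumption: NFC chosen as the canonical form
}
```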
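
Finally, gmcharlt's 22:17 suggestion, sketched: parse the stored unlinked-subfields XML with a constant 'MARC21' flavour, since those snippets are always UTF-8 and are generated without a UNIMARC 100$a. The function name comes from the discussion itself; that `StripNonXmlChars` lives in C4::Charset is an assumption, and as noted above this would still need testing under both MARC21 and UNIMARC.

```perl
use strict;
use warnings;
use MARC::Record;
use MARC::File::XML;   # installs new_from_xml into MARC::Record
use C4::Charset;       # assumption: provides StripNonXmlChars

sub _parse_unlinked_item_subfields_from_xml {
    my ($xml) = @_;
    return unless defined $xml and $xml ne '';
    # Constant 'MARC21' instead of C4::Context->preference('marcflavour'):
    # decoding these snippets as UNIMARC breaks for want of a 100$a.
    my $marc = MARC::Record->new_from_xml(
        StripNonXmlChars($xml), 'UTF-8', 'MARC21'
    );
    return $marc;   # sketch: the real routine goes on to extract subfields
}
```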