IRC log for #gluster, 2013-07-03

All times shown according to UTC.

Time Nick Message
00:02 RangerRick8 joined #gluster
00:12 RangerRick8 joined #gluster
00:29 \_pol joined #gluster
00:30 chirino joined #gluster
00:38 rcoup joined #gluster
00:53 gluslog_ joined #gluster
00:55 paratai_ joined #gluster
00:57 chirino joined #gluster
01:00 chlunde_ joined #gluster
01:00 morse_ joined #gluster
01:00 ThatGraemeGuy joined #gluster
01:00 edward1 joined #gluster
01:00 edward1 joined #gluster
01:01 rwheeler joined #gluster
01:01 krokarion joined #gluster
01:01 bennyturns joined #gluster
01:01 JoeJulian joined #gluster
01:01 atrius_ joined #gluster
01:02 RobertLaptop joined #gluster
01:02 T0aD- joined #gluster
01:04 abyss^_ joined #gluster
01:05 jones_d_ joined #gluster
01:06 mtanner__ joined #gluster
01:07 badone joined #gluster
01:08 kevein joined #gluster
01:10 premera_t joined #gluster
01:11 pull joined #gluster
01:13 mrEriksson joined #gluster
01:14 NeonLicht joined #gluster
01:17 gmcwhistler joined #gluster
01:19 bala joined #gluster
01:27 juhaj joined #gluster
01:29 vpshastry joined #gluster
01:29 js_ joined #gluster
01:37 thisisdave joined #gluster
01:51 joelwallis joined #gluster
02:12 joelwallis joined #gluster
02:28 chirino joined #gluster
02:36 aknapp joined #gluster
02:37 joelwallis_ joined #gluster
02:40 RangerRick8 joined #gluster
02:41 badone joined #gluster
02:44 semiosis int __f_unused
02:44 * semiosis wonders if that's used
02:46 semiosis error: 'struct statvfs' has no member named '__f_unused'
02:46 semiosis ah ok
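(For context: only the POSIX-specified members of struct statvfs are portable; internal padding fields such as __f_unused or __f_spare differ between libc versions, which is what the compile error above is tripping over. A minimal sketch that sticks to the portable members -- illustrative only, not GlusterFS or libgfapi-jni code:)

    #include <sys/statvfs.h>
    #include <stdio.h>

    int main(void) {
        struct statvfs vfs;
        if (statvfs("/", &vfs) != 0) {   /* query the root filesystem */
            perror("statvfs");
            return 1;
        }
        /* Only POSIX members are read; padding fields are libc-private. */
        printf("frsize=%lu blocks=%llu bfree=%llu\n",
               (unsigned long)vfs.f_frsize,
               (unsigned long long)vfs.f_blocks,
               (unsigned long long)vfs.f_bfree);
        return 0;
    }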
03:22 bharata-rao joined #gluster
03:28 ccha joined #gluster
03:29 mohankumar joined #gluster
03:40 harish joined #gluster
03:58 ThatGraemeGuy joined #gluster
04:10 CheRi joined #gluster
04:14 semiosis @java
04:21 RangerRick9 joined #gluster
04:24 andreask joined #gluster
04:27 hagarth joined #gluster
04:27 vpshastry joined #gluster
04:39 sgowda joined #gluster
04:45 harish joined #gluster
04:51 xymox joined #gluster
05:03 deepakcs joined #gluster
05:07 mooperd joined #gluster
05:11 guigui1 joined #gluster
05:15 bulde joined #gluster
05:17 vimal joined #gluster
05:17 kshlm joined #gluster
05:19 piotrektt joined #gluster
05:24 shylesh joined #gluster
05:24 rastar joined #gluster
05:27 raghu joined #gluster
05:40 psharma joined #gluster
05:45 FilipeMaia joined #gluster
05:46 ricky-ticky joined #gluster
05:46 bala joined #gluster
05:51 shireesh joined #gluster
06:15 vpshastry joined #gluster
06:20 jtux joined #gluster
06:20 puebele1 joined #gluster
06:28 shruti joined #gluster
06:33 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
06:36 satheesh joined #gluster
06:39 rgustafs joined #gluster
06:40 puebele joined #gluster
06:45 CheRi joined #gluster
06:48 FilipeMaia_ joined #gluster
06:51 ekuric joined #gluster
06:51 vshankar joined #gluster
06:53 ctria joined #gluster
06:54 satheesh joined #gluster
07:03 kevein joined #gluster
07:08 ngoswami joined #gluster
07:10 pkoro joined #gluster
07:16 semiosis joined #gluster
07:22 mooperd joined #gluster
07:23 dobber_ joined #gluster
07:24 hateya joined #gluster
07:31 bala joined #gluster
07:45 VSpike joined #gluster
08:02 CheRi joined #gluster
08:18 VSpike Since the last few days, I have thousands and thousands of errors in my brick log on both servers, of the form: [2013-07-03 05:11:49.229206] E [posix.c:224:posix_stat] 0-gv0-posix: lstat on /mnt/brick0/.glusterfs/69/29/692952d6-fc58-4823-9bb4-9fda2d8a7475 failed: No such file or directory
08:19 VSpike Always in pairs with [2013-07-03 05:11:49.229250] I [server3_1-fops.c:1085:server_unlink_cbk] 0-gv0-server: 6455739: UNLINK <gfid:3596c7f2-5bc3-4454-9207-b05d091cca0a>/00e204af-75a4-491f-b1ae-98ce3594106c (692952d6-fc58-4823-9bb4-9fda2d8a7475) ==> -1 (No such file or directory)
08:19 chirino joined #gluster
08:19 VSpike I found a few questions like this on mailing lists, but no obvious answer. Any suggestions how I can resolve this?
08:27 anands joined #gluster
08:31 FilipeMaia joined #gluster
08:46 ekuric joined #gluster
08:48 tziOm joined #gluster
08:55 ollivera joined #gluster
08:59 VSpike Similarly, the glustershd.log is full of [2013-07-03 09:51:51.480557] W [client3_1-fops.c:592:client3_1_unlink_cbk] 0-gv0-client-1: remote operation failed: No such file or directory
09:02 rastar joined #gluster
09:05 baoboa joined #gluster
09:17 hateya joined #gluster
09:21 manik joined #gluster
09:39 shylesh joined #gluster
09:42 ramkrsna joined #gluster
09:42 ramkrsna joined #gluster
09:47 realdannys1 joined #gluster
09:48 realdannys1 Hmm ok, so I restarted my EC2 instance at a larger size yesterday to test FFmpeg (and it worked with more RAM). Straight away Gluster wouldn't mount on my client AGAIN (it's so delicate!). I rebooted back to micro, still no luck. All the things that caused problems before (iptables) were disabled, but it wouldn't mount. I restarted the glusterd service and suddenly the volume is gone?? Is that normal??
10:01 spider_fingers joined #gluster
10:02 vpshastry1 joined #gluster
10:08 anands joined #gluster
10:09 ngoswami joined #gluster
10:11 realdannys1 It says no volumes exist, but when I go to create a volume again it says that brick already belongs to a volume ??!!
10:32 lpabon joined #gluster
10:33 manik joined #gluster
10:33 vpshastry joined #gluster
10:34 glusterbot New news from newglusterbugs: [Bug 980838] nufa xlator's algorithm for locating local brick doesn't work if client xlators are not its immediate children <http://goo.gl/tmhqS>
10:47 anands joined #gluster
10:51 rastar joined #gluster
11:10 realdannys1 joined #gluster
11:12 duerF joined #gluster
11:13 harish joined #gluster
11:15 chirino joined #gluster
11:18 vpshastry joined #gluster
11:22 CheRi joined #gluster
11:35 rcheleguini joined #gluster
12:07 pkoro joined #gluster
12:18 CheRi joined #gluster
12:19 krokar joined #gluster
12:21 vpshastry1 joined #gluster
12:26 manik joined #gluster
12:35 jord-eye2 left #gluster
12:39 hagarth joined #gluster
12:47 semiosis chirino: mapped struct statvfs yesterday. it turned out to be much easier than i expected.
12:47 chirino yay!
12:47 semiosis hawtjni is pretty nice
12:47 chirino yeah only weird odd bit is doing memcpy if you want the struct on the C heap.
12:48 semiosis hmm, dont think that's necessary in this case
12:48 chirino coolio.  yeah just using as a temp stack value is really easy.
12:49 pkoro joined #gluster
12:50 semiosis i'm building libgfapi-jni in tandem with the nio.2 filesystem provider: https://github.com/semiosis/glusterfs-java-filesystem
12:50 glusterbot <http://goo.gl/KNsBZ> (at github.com)
12:50 chirino ah.. yeah struct handling is another one of those areas where hawtjni might be easier than JNA, as with JNA you have to know how the C side lays out structs in memory.
12:50 andreask joined #gluster
12:50 chirino can get tricky once you start targeting multiple platforms.
12:54 semiosis i wonder what would happen if someone had a glusterfs volume larger than 16 EiB
12:54 recidive joined #gluster
12:55 chirino data singularity?
12:55 semiosis hahaha
12:55 semiosis actually that number is wrong
12:55 semiosis but there is a limit to what the statvfs struct can report
12:56 semiosis since it's blocks & block size of 4k, it would be 16EiB * 4kB
12:56 semiosis probably a question for avati
12:56 robos joined #gluster
12:57 hagarth joined #gluster
12:58 semiosis hmm, thats 65 ZiB, maybe i shouldn't worry about that
12:59 semiosis s/65/64/
12:59 glusterbot What semiosis meant to say was: hmm, thats 64 ZiB, maybe i shouldn't worry about that
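(The arithmetic behind that correction, as a small standalone check -- assuming a 64-bit f_blocks counter and the 4 KiB block size semiosis mentions: 2^64 blocks x 2^12 bytes = 2^76 bytes = 64 ZiB:)

    #include <stdio.h>

    int main(void) {
        long double max_blocks = 18446744073709551616.0L;    /* 2^64 possible f_blocks values */
        long double block_size = 4096.0L;                     /* 4 KiB, per the discussion */
        long double zib        = 1180591620717411303424.0L;  /* 2^70 bytes = 1 ZiB */
        printf("statvfs ceiling: %.0Lf ZiB\n", max_blocks * block_size / zib);  /* prints 64 */
        return 0;
    }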
13:00 ninkotech joined #gluster
13:00 ninkotech_ joined #gluster
13:01 bulde semiosis: easy to check that out with a small code change (in fuse_statfs_cbk() send all the -1 (ie, uint64 max) as reply (hardcoded) and see what df -h reports :-)
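(bulde is referring to GlusterFS's internal fuse_statfs_cbk(); purely to illustrate the experiment he describes -- hardcode a maximal reply and see what df -h reports -- here is a generic libfuse-style statfs handler with made-up values. It is not GlusterFS code and not the actual callback signature:)

    #include <stdint.h>
    #include <sys/statvfs.h>

    /* Illustrative FUSE high-level statfs handler: report the largest values
       the struct can carry, so df -h shows the reporting ceiling. */
    static int demo_statfs(const char *path, struct statvfs *st)
    {
        (void)path;
        st->f_bsize  = 4096;
        st->f_frsize = 4096;
        st->f_blocks = UINT64_MAX;   /* "all the -1s", i.e. uint64 max */
        st->f_bfree  = UINT64_MAX;
        st->f_bavail = UINT64_MAX;
        st->f_files  = UINT64_MAX;
        st->f_ffree  = UINT64_MAX;
        return 0;
    }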
13:01 ngoswami joined #gluster
13:01 semiosis hi bulde!
13:02 semiosis i might give that a try tonight
13:05 aliguori joined #gluster
13:06 bulde semiosis: cool, let me know if you have issues, will be able to help you tomorrow (currently reviewing couple of geo-rep/quota patches)
13:07 semiosis thanks :)
13:07 aknapp joined #gluster
13:18 ngoswami joined #gluster
13:20 dewey joined #gluster
13:25 satheesh joined #gluster
13:30 manik joined #gluster
13:41 MarkR joined #gluster
13:42 jthorne joined #gluster
13:44 failshell joined #gluster
13:45 MarkR In http://www.gluster.org/category/ubuntu/, Matt Cockayne advises creating a /etc/glusterfs.vol on each client and using Upstart to mount volumes on Ubuntu 12.04 -- is this seen as good practice here?
13:46 fleducquede Hi guys
13:47 semiosis MarkR: sounds like a poor idea, but let me look into it further & get back to you in a bit
13:47 semiosis afk
13:48 MarkR Thanks semiosis
13:49 fleducquede Is there any documentation about the multimaster geo replication that will be included in 3.4 ?
13:49 kaptk2 joined #gluster
13:50 yinyin_ joined #gluster
13:52 fleducquede Apparently, the feature has been dropped for 3.4, and postponed to 3.5 :S
13:52 fleducquede https://lists.gnu.org/archive/html/gluster-devel/2012-12/msg00016.html
13:52 glusterbot <http://goo.gl/bltVo> (at lists.gnu.org)
13:53 VSpike My glustershd.log is full of [2013-07-03 09:51:51.480557] W [client3_1-fops.c:592:client3_1_unlink_cbk] 0-gv0-client-1: remote operation failed: No such file or directory and brick log is full of: [2013-07-03 05:11:49.229206] E [posix.c:224:posix_stat] 0-gv0-posix: lstat on /mnt/brick0/.glusterfs/69/29/692952d6-fc58-4823-9bb4-9fda2d8a7475 failed: No such file or directory ... thousands of them, on both servers. Any idea how to resolve this?
13:53 ngoswami joined #gluster
13:55 fleducquede hi
13:55 glusterbot fleducquede: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:55 fleducquede have you tried to remove some files that were in the .glusterfs directory?
13:56 fleducquede because that may explain why you see those messages
14:01 mohankumar joined #gluster
14:02 Savaticus joined #gluster
14:10 FilipeMaia joined #gluster
14:10 ngoswami joined #gluster
14:11 bugs_ joined #gluster
14:19 semiosis MarkR: my thoughts on that blog post...
14:20 semiosis 1. most importantly, using a volfile directly to make a client mount sacrifices all of the dynamic volume configuration features of glusterd & the gluster CLI.  it's strongly discouraged except where absolutely unavoidable.
14:21 semiosis 2. related, the author gives no explanation for why this method was chosen... what went wrong with the normal way of mounting a client?!  i sure hope a bug report was filed about that!
14:22 semiosis 3. his upstart job probably works but it's not done the "right way" -- which is to block the mounting event until some state obtains
14:22 semiosis see the mounting-glusterfs.conf job included in my package for example
14:22 semiosis (side note, i need to update that job, will do so later today)
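(A rough sketch of the "right way" pattern semiosis describes -- blocking the Upstart mounting event until the networking job is up. This assumes Ubuntu's wait-for-state helper job and is not necessarily the exact contents of his packaged mounting-glusterfs.conf:)

    # /etc/init/mounting-glusterfs.conf  (sketch only)
    # Hold mounting of glusterfs fstab entries until networking is running.
    start on mounting TYPE=glusterfs
    task
    exec start wait-for-state WAIT_FOR=networking WAITER=mounting-glusterfs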
14:34 MarkR semiosis: thank you for sorting this out -- Currently our client servers do not have a /etc/init/mounting-glusterfs.conf, it gets installed with glusterfs-server, not *-client or *-common.
14:34 semiosis right thats what i need to fix today
14:34 semiosis by the end of the day there'll be a new -client package which provides it
14:35 mooperd joined #gluster
14:40 MarkR semiosis, could you please direct me to a mounting-glusterfs.conf I can already use for mounting glusterfs volumes on Ubuntu?
14:40 failshell /etc/fstab?
14:40 failshell or did ubuntu remove that as well?
14:40 failshell they seem to like to remove stuff hehe
14:47 spider_fingers joined #gluster
14:47 spider_fingers left #gluster
14:49 semiosis MarkR: do you have one glusterfs client mount on the system or more than one?
14:51 semiosis failshell: it's complicated to explain & i'm busy... ping me later today and i'd be happy to go over it if you really care to know
14:51 ultrabizweb joined #gluster
14:52 MarkR semiosis: I'm new to glusterfs, so I might misunderstand how things work. I have 2 bricks (gluster servers), exporting a home mount point to 12 servers which mount the glusterfs
14:53 MarkR file1.cluster.peercode.nl:/GLUSTER-HOME /home glusterfs defaults,_netdev 0 0
14:53 drrk joined #gluster
14:53 semiosis MarkR: thanks
14:54 semiosis this should work for you: http://pastie.org/8106999
14:54 glusterbot Title: #8106999 - Pastie (at pastie.org)
14:54 drrk Hi, I have a quick question about gluster: on a machine with 48 disks, would that (sensibly) be 1 server with 46 bricks (2 disks left for the OS)
14:54 bsaggy joined #gluster
14:54 drrk and then you replicate across the bricks on one server.
14:54 * semiosis bbl
14:54 drrk can I then add another server set up the same, and have replication across both
14:55 MarkR Upstart /etc/init/mounting-glusterfs.conf from https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3/+files/glusterfs_3.3.1-ubuntu1~precise9.debian.tar.gz
14:55 glusterbot <http://goo.gl/Kn3Ct> (at launchpad.net)
14:56 ndevos drrk: rather use RAID6 and lvm on one server, and export smaller bricks, replicate the bricks to a 2nd server with bricks of the same size
14:56 ndevos s/smaller/less and bigger/
14:56 glusterbot What ndevos meant to say was: drrk: rather use RAID6 and lvm on one server, and export less and bigger bricks, replicate the bricks to a 2nd server with bricks of the same size
14:56 MarkR semiosis: ah, I'll try the pastie. The other version gave me a "The disk drive for /home is not ready yet or not present." during (re)boot
14:56 drrk ndevos: any reason?
14:57 semiosis MarkR: always use 'nobootwait' fstab option for remote filesystems.  see man fstab
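(A hedged example of such an fstab entry for a native client mount -- the server, volume, and mount point names are placeholders:)

    # /etc/fstab
    server1:/myvol   /mnt/gluster   glusterfs   defaults,_netdev,nobootwait   0  0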
14:57 ndevos drrk: it will be easier to replace one disk that way, replacing a failing disk in a raid-set is simple, replacing a complete failed brick is more difficult
14:58 drrk right, okay that makes sense
14:58 drrk how difficult is difficult though?
14:58 failshell semiosis: ill be around, ping me when you have time to explain. im curious.
14:59 ujjain joined #gluster
14:59 ndevos well, not impossible, but definitely not trivial - and that also means that you need to detect the failing disk, something a raid-controller would do for you
15:00 ndevos currently, glusterfs does not do hardware/filesystem checking, but I hope to add that with http://review.gluster.org/5176
15:00 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:00 drrk okay, thanks
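(For reference, the manual recovery ndevos is alluding to for a wholly failed brick in the 3.3 era went through replace-brick plus a self-heal; a hedged sketch with placeholder volume and brick names:)

    # Swap the dead brick for a fresh one, then trigger a full self-heal
    gluster volume replace-brick myvol server1:/bricks/dead server1:/bricks/new commit force
    gluster volume heal myvol full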
15:00 drrk as for features not yet implemented, how far off is snapshotting?
15:01 * ndevos doesnt know
15:03 drrk thanks
15:03 drrk left #gluster
15:03 vpshastry joined #gluster
15:08 MarkR semiosis: I added nobootwait. I removed _netdev, as it gives me an "unknown option _netdev (ignored)"
15:09 MarkR semiosis: I added nobootwait, which really should be in fstab indeed. Should I remove _netdev, as it gives me an "unknown option _netdev (ignored)"?
15:10 failshell yeah i removed _netdev as its not working with 3.3.x
15:11 jag3773 joined #gluster
15:15 sprachgenerator joined #gluster
15:24 plarsen joined #gluster
15:24 MarkR semiosis: with your pastie Upstart script and defaults,nobootwait as fstab options, the glusterfs is mounted upon (re)boot. Thanks. The only thing bothering me is that now I get "mountall: Disconnected from Plymouth" which I don't think I got before. dmesg gives "init: plymouth-upstart-bridge main process (345) killed by TERM signal".
15:31 FilipeMaia_ joined #gluster
15:41 umarillian1 joined #gluster
15:41 umarillian1 left #gluster
15:41 Mo__ joined #gluster
15:42 aknapp joined #gluster
15:44 vimal joined #gluster
15:47 bala1 joined #gluster
15:49 _Bryan_ joined #gluster
15:58 sprachgenerator joined #gluster
16:03 CheRi_ joined #gluster
16:04 rhys_ joined #gluster
16:05 rhys_ i have 2 gluster peers in a raid1 style setup. i need to reboot one of the peers. how do i make sure that all clients are using the working peer/flushed/synced etc before rebooting?
16:05 zaitcev joined #gluster
16:10 rhys_ i guess i just want to stop the 1 brick. if i stop the 1 brick, will it shut down gracefully?
16:12 \_pol joined #gluster
16:15 \_pol joined #gluster
16:17 vpshastry left #gluster
16:20 rhys_ i've been looking for 'graceful brick failure' and haven't found much on the behavior of the clients
16:31 failshell is it possible to load data directly on a local disk of a brick, and then somehow trigger a replication?
16:36 lpabon joined #gluster
16:42 ryker joined #gluster
16:45 ryker hi.  Is there any documentation on the details of geo-replication configuration?  such as: how often replication to the slave runs, buffers, etc.?
16:46 ryker everything i've found on the gluster site and searching google has been pretty generic
16:46 andreask joined #gluster
16:46 ryker i'm also having a hard time finding the detailed documentation for each translator
16:46 ryker could have sworn there used to be more detailed docs for the translators back pre version 3.x
16:48 ryker hah. just found the translator docs. http://gluster.org/community/documentation/index.php/Translators
16:48 glusterbot <http://goo.gl/1HNRJ> (at gluster.org)
16:49 ryker ooh, a glusterbot
16:49 ryker hi glusterbot
16:49 ryker glusterbot: help
16:49 glusterbot ryker: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
16:49 m0zes ,,(meh)
16:49 glusterbot I'm not happy about it either
16:50 ryker glusterbot: commands
16:50 ryker m0zes: maybe gluster bot can help me? any idea how I get it to list commands? :)
16:51 ryker glusterbot: docs
16:51 glusterbot ryker: I do not know about 'docs', but I do know about these similar topics: 'dd'
16:51 ryker glusterbot: dd
16:51 glusterbot ryker: If you're testing with dd and seeing slow results, it's probably because you're not filling your tcp packets. Make sure you use a large block size. Further, dd isn't going to tell you how your cluster will perform with simultaneous clients or how a real load will perform. Try testing what you really want to have happen.
16:51 ryker glusterbot: geo-relication
16:51 glusterbot ryker: I do not know about 'geo-relication', but I do know about these similar topics: 'geo-replication'
16:51 ryker glusterbot: geo-replication
16:51 glusterbot ryker: See the documentation at http://goo.gl/jFCTp
16:52 ryker bad glusterbot.  that link doesn't work
16:52 m0zes :/ apparently that doc is out of date
16:52 Technicool joined #gluster
16:52 ryker ,,(meh)
16:52 glusterbot I'm not happy about it either
16:52 ryker :)
16:55 jclift_ joined #gluster
16:56 rhys_ excellent
16:56 rhys_ stopping the gluster process destroyed a 30 gig qcow VM
16:58 hateya joined #gluster
17:06 samppah rhys_: damn, is it completely unusable?
17:06 samppah what process did you stop exactly?
17:08 rhys_ /etc/init.d/glusterfs-server stop
17:08 rhys_ restarted the box
17:09 rhys_ because according to the single and only message i found on the mailing list sending SIGTERM to the gluster process would properly fix things
17:09 rhys_ it would die gracefully
17:09 rhys_ it didn't. and now i have to restore a production machine
17:10 rhys_ no flush commands, no sync commands, nothing to tell it to go offline
17:11 samppah rhys_: what gluster version this is?
17:11 samppah and what kind of volume setup?
17:13 rhys_ 3.3.1 2 bricks, TCP, Replicated over 2 dedicated 10GBit NICs
17:14 rhys_ somehow wrote trash all over it. no more disklabel, no more partitions, nothing
17:15 andreask and its the same garbage on both bricks?
17:18 rhys_ replicated identically
17:18 rhys_ this was 1 vm out of 10.
17:19 failshell replication != backups
17:20 rhys_ failshell, as i'm restoring from backup, I appreciate you chiming in with that ultra helpful tidbit
17:21 andreask rhys_: qcow2 image format?
17:21 rhys_ yep
17:22 rhys_ i'd just like to know about why there is zero documentation on graceful node reboots and why it decided to eat my data
17:23 andreask kvm?
17:24 samppah rhys_: what was the actual reason for reboot and did you reboot both servers or just one?
17:25 rhys_ kernel security updates
17:26 rhys_ they are KVM hypervisors. moved all the VMs to one hypervisor, looked hard to try to do the same with the gluster bricks
17:26 rhys_ because both need to be rebooted for security updates
17:30 neofob left #gluster
17:32 samppah rhys_: the glusterfs client writes data synchronously to all servers, so there is no need to change active servers..
17:33 samppah if server goes down then client waits for time that is specified in network.ping-timeout before it considers server to be down
17:34 samppah once server is back online again it copies changed bits from good server
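(The timeout samppah mentions is the network.ping-timeout volume option, 42 seconds by default; a hedged sketch of setting it and watching the resync, with a placeholder volume name:)

    gluster volume set myvol network.ping-timeout 42   # default is 42 seconds
    gluster volume heal myvol info                     # entries still pending self-heal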
17:35 rhys_ then my guess is that its not glusters fault directly
17:35 rhys_ its the KVM write cache
17:36 samppah is it possible that not all changed bits were copied before second server was rebooted?
17:36 rhys_ no. i stopped the service and waited a good long while
17:37 rhys_ KVM cannot use a fuse gluster mount directory. it errors on some operation
17:37 rhys_ the solution is to enable some kind of caching
17:38 samppah ah
17:38 samppah cache=writeback or something like that?
17:38 rhys_ writethrough i thought was safer
17:38 samppah i think so too, hmm
17:38 samppah rhel or something similar?
17:39 rhys_ Debian 6
17:39 sjoeboo joined #gluster
17:39 rhys_ Proxmox to be specific.
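(The cache modes being debated map to qemu/KVM's drive cache option; a hedged illustration with a placeholder image path. writethrough acknowledges a guest write only after it reaches storage, while writeback acknowledges from the host page cache and risks loss if the host dies:)

    kvm -m 2048 -drive file=/mnt/gluster/vm.qcow2,if=virtio,cache=writethrough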
17:39 nightwalk joined #gluster
17:41 bulde joined #gluster
17:53 stickyboy joined #gluster
17:57 edong23 joined #gluster
18:06 MarkR joined #gluster
18:09 aknapp joined #gluster
18:10 MarkR Is it correct that if you only use gluster native filesystem (so no nfs), you don't need to open up portmapper udp/tcp port 111 on your firewall?
18:15 Technicool MarkR, yes i believe so
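(A hedged sketch of a GlusterFS 3.3-era firewall for native-protocol-only access -- port numbers are version-dependent, and 111 plus the NFS ports are only needed if the gluster NFS server is used:)

    # 24007 = glusterd management, 24009 and up = one port per brick (GlusterFS 3.3)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24009:24012 -j ACCEPT   # widen the range for more bricks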
18:19 failshel_ joined #gluster
18:21 FilipeMaia joined #gluster
18:24 hateya joined #gluster
18:27 balunasj joined #gluster
18:30 balunasj joined #gluster
18:44 semiosis wb Technicool
18:49 FilipeMaia joined #gluster
18:54 krishna joined #gluster
18:54 bulde joined #gluster
18:59 MarkR On ubuntu 12.04/gluster 3.3.1, I have in fstab: file1:/GLUSTER-HOME /home glusterfs defaults,nobootwait,backupvolfile-server=file2 0 0. This gives me two error messages: 1) on boot: "mountall: Disconnected from Plymouth"; 2) when I type 'mount /home', I get the error "unknown option _netdev (ignored)", apparently generated by the nobootwait option. Should I be bothered by these two issues?
19:00 MarkR On ubuntu 12.04/gluster 3.3.1, I have in fstab: file1:/GLUSTER-HOME /home glusterfs defaults,nobootwait,backupvolfile-server=file2 0 0
19:00 MarkR This gives me two error messages:
19:00 MarkR 1) on boot: "mountall: Disconnected from Plymouth";
19:00 MarkR 2) when I type 'mount /home', I get the error "unknown option _netdev (ignored)", apparently generated by the nobootwait option
19:00 MarkR Should I be bothered by these two issues?
19:01 semiosis MarkR: please ,,(pasteinfo)
19:01 glusterbot MarkR: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:01 semiosis you can ignore the _netdev ignored message
19:01 semiosis nothing to worry about
19:02 semiosis also the plymouth thing has something to do with ubuntu's boot splash screen, i wouldn't worry about that either
19:02 MarkR (I'm sorry, I'm new to IRC. Shall use pasteinfo in the future)
19:02 semiosis MarkR: pastebin sites are generally useful but i'm looking for your 'gluster volume info' output in particular
19:04 MarkR gluster volume info: http://ur1.ca/eipyj
19:04 glusterbot Title: #22821 Fedora Project Pastebin (at ur1.ca)
19:06 semiosis ok so whats the problem?
19:07 MarkR Only those error messages I get, which I will ignore from now on. The file system mounts alright, no problem!
19:20 Savaticus joined #gluster
19:29 mooperd joined #gluster
19:33 kris-- left #gluster
19:36 balunasj joined #gluster
19:38 andreask joined #gluster
20:10 failshell joined #gluster
20:36 Mo__ joined #gluster
20:40 failshell joined #gluster
20:42 stigchristian joined #gluster
20:46 Mo__ joined #gluster
20:50 yinyin joined #gluster
20:56 elfar joined #gluster
21:09 nightwalk joined #gluster
21:30 aknapp joined #gluster
21:44 daMaestro joined #gluster
22:06 xymox joined #gluster
22:14 yinyin joined #gluster
22:37 aknapp_ joined #gluster
22:45 sprachgenerator joined #gluster
23:31 aknapp joined #gluster
23:40 _pol joined #gluster
23:41 nightwalk joined #gluster
23:43 elfar joined #gluster
23:47 Debolaz joined #gluster
23:49 zykure joined #gluster
23:54 yinyin joined #gluster
