IRC log for #gluster, 2014-01-01

All times shown according to UTC.

Time Nick Message
00:01 cfeller joined #gluster
00:01 mkzero joined #gluster
00:05 purpleidea anyone know why this is failing, when i run a gluster volume create, i get: http://paste.fedoraproject.org/65182/13885347
00:05 glusterbot Title: #65182 Fedora Project Pastebin (at paste.fedoraproject.org)
00:05 purpleidea in my cli.log but no other logs anywhere else...
00:06 purpleidea command is: http://paste.fedoraproject.org/65183/34802138
00:06 glusterbot Title: #65183 Fedora Project Pastebin (at paste.fedoraproject.org)
00:44 jporterfield joined #gluster
00:54 jporterfield joined #gluster
01:01 tryggvil joined #gluster
01:09 CROS___ left #gluster
01:14 MrNaviPacho joined #gluster
01:33 jporterfield joined #gluster
01:40 jporterfield joined #gluster
01:52 jporterfield joined #gluster
01:56 mkzero joined #gluster
02:05 vpshastry joined #gluster
02:33 bharata-rao joined #gluster
02:33 jporterfield joined #gluster
02:45 jporterfield joined #gluster
02:50 jag3773 joined #gluster
02:52 Freeheart joined #gluster
02:55 mkzero joined #gluster
02:58 jporterfield joined #gluster
02:59 Freeheart I have a question that's so simple I'm embarrassed to ask, but I can't seem to find the answer to the specific question I am wondering about. Say I have 4 storage nodes (servers, VMs, whatever, each labeled server1, server2, et cetera), each containing a single brick, replicated to each node. I have n "compute" nodes that mount the Gluster volume in /etc/fstab. Due to "some reason", any single node drops completely offline. I know Gluster maintains data
02:59 Freeheart integrity, but if fstab specified the server that went down, that compute node no longer has access to the Gluster volumes, correct? What is the best practice for making the Gluster volumes themselves HA as exposed to such compute nodes?
03:27 jporterfield joined #gluster
04:04 divbell freeheart, are you using the native client or nfs?
04:06 divbell the native client supports transparent failover but the client has to be able to resolve the volume brick hostname of the replica
04:06 divbell see, e.g.: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Automatically_Mounting_Volumes
04:06 glusterbot Title: Gluster 3.2: Automatically Mounting Volumes - GlusterDocumentation (at www.gluster.org)
04:06 divbell Note: The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount).
04:07 divbell you can probably verify that behavior with any libpcap program (tcpdump, wireshark, etc) or netstat
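A minimal sketch of the fstab setup divbell is describing, assuming the native (FUSE) client of the GlusterFS 3.4 era and hypothetical hostnames server1/server2 and volume name myvol; the backupvolfile-server option only affects the initial volfile fetch, since after that the client talks directly to every brick listed in the volfile:

    # /etc/fstab entry on a compute node
    # server1 is only contacted to fetch the volfile; if it is down at mount
    # time, the volfile is fetched from server2 instead
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0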
04:25 mohankumar__ joined #gluster
04:26 dhyan joined #gluster
04:37 satheesh joined #gluster
04:38 Freeheart divbell, Thanks! That was the perfect answer. Both for explaining the behavior, and for giving me docs for it. :D
04:38 Freeheart Digging in.
04:38 Freeheart Native client. All of the compute nodes support Gluster.
04:46 jporterfield joined #gluster
04:48 skullone left #gluster
04:52 vpshastry joined #gluster
04:58 satheesh joined #gluster
05:12 mkzero joined #gluster
05:15 jporterfield joined #gluster
05:20 jporterfield joined #gluster
05:22 psyl0n joined #gluster
05:25 marcoceppi joined #gluster
05:25 marcoceppi joined #gluster
05:30 divbell joined #gluster
05:39 jporterfield joined #gluster
05:39 satheesh_ joined #gluster
06:15 davinder joined #gluster
06:23 jporterfield joined #gluster
06:29 vimal joined #gluster
06:43 CROS_ joined #gluster
06:49 vimal joined #gluster
06:55 vikumar joined #gluster
07:01 mohankumar__ joined #gluster
07:02 vpshastry left #gluster
07:14 vimal joined #gluster
08:38 jporterfield joined #gluster
08:50 jporterfield joined #gluster
08:52 _ilbot joined #gluster
08:52 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
09:02 jporterfield joined #gluster
09:08 jporterfield joined #gluster
09:20 jporterfield joined #gluster
09:21 harish_ joined #gluster
09:30 jporterfield joined #gluster
09:47 jporterfield joined #gluster
09:59 jporterfield joined #gluster
10:07 jporterfield joined #gluster
10:31 jporterfield joined #gluster
10:32 Oneiroi joined #gluster
11:07 satheesh_ joined #gluster
11:08 sticky_afk joined #gluster
11:08 stickyboy joined #gluster
11:12 satheesh__ joined #gluster
11:29 jporterfield joined #gluster
11:37 jporterfield joined #gluster
11:44 bharata_ joined #gluster
11:54 Oneiroi joined #gluster
12:02 vpshastry joined #gluster
12:03 vpshastry left #gluster
12:06 rotbeard joined #gluster
12:06 mkzero joined #gluster
12:23 satheesh joined #gluster
12:27 dhyan joined #gluster
12:36 tryggvil joined #gluster
12:56 mkzero joined #gluster
13:04 tryggvil joined #gluster
13:10 jporterfield joined #gluster
13:16 jporterfield joined #gluster
13:21 jporterfield joined #gluster
13:27 stickyboy joined #gluster
13:33 hagarth joined #gluster
13:38 stickyboy joined #gluster
13:39 jporterfield joined #gluster
13:46 tryggvil joined #gluster
13:52 tryggvil joined #gluster
14:08 mkzero joined #gluster
14:12 tor joined #gluster
14:41 flrichar joined #gluster
14:47 Staples84 joined #gluster
15:25 jag3773 joined #gluster
15:51 xymox joined #gluster
16:09 glusterbot New news from resolvedglusterbugs: [Bug 795419] bring in event-history mechanism based on circular buffer <https://bugzilla.redhat.com/show_bug.cgi?id=795419>
17:09 mkzero joined #gluster
17:56 sticky_afk joined #gluster
17:56 stickyboy joined #gluster
17:58 blaa joined #gluster
18:33 andreask joined #gluster
18:42 psyl0n joined #gluster
18:48 blaa after rebooting a server and running gluster volume status i get
18:48 blaa Another transaction is in progress. Please try again after sometime.
18:49 blaa i understand it's bug 981661 but i don't understand the resolution; maybe someone knows what this means
18:49 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=981661 high, high, ---, kdhananj, CLOSED ERRATA, quota + core: Another transaction is in progress. Please try again after sometime.
18:49 blaa Upon receiving DISCONNECT from the originator of a transaction, on the rest
18:49 blaa of the nodes, perform the following actions:
18:49 blaa a. Release the lock; and
18:49 blaa b. reset the state of the node to GD_OP_STATE_DEFAULT.
18:50 blaa what does "release the lock" and "reset the state to GD_OP_STATE_DEFAULT" mean
18:58 calum_ joined #gluster
19:27 blaa anybody know how to release a lock originated by the storage originator?
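A hedged sketch of the usual way out of the stale-lock state blaa is describing, assuming the lock is simply left over from the rebooted node's interrupted transaction: restarting the glusterd management daemon on the nodes that still hold the lock releases it and resets the op state, and it does not touch the brick processes or existing client mounts:

    # run on each node that still reports the transaction error
    service glusterd restart
    # then check that management commands respond again
    gluster volume status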
19:32 Freeheart left #gluster
19:36 Mick27 joined #gluster
19:36 Mick27 howdy folks
19:36 Mick27 anyone around ?
19:36 Mick27 I have some kind of an issue
19:36 Mick27 and google does not help
19:37 Mick27 we changed our ip addressing, requiring me to reboot the server
19:37 Mick27 servers
19:37 Mick27 the ip didn't change, just the hosting underlying system
19:37 Mick27 anyhow my gluster setup does not mount the disk
19:37 Mick27 and mount -a gives me
19:37 Mick27 mount.nfs: requested NFS version or transport protocol is not supported
19:37 glusterbot Mick27: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
19:39 Mick27 ok, how weird, everything just automagically came back
19:39 Mick27 as if touch /etc/fstab did some trick
19:41 Mick27 well not exactly
19:41 Mick27 tried rebooting
19:41 Mick27 same error
19:45 Mick27 what can cause delays for a volume to be up at boot ?
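A hedged note on the two usual causes of exactly this symptom: Gluster's built-in NFS server speaks only NFSv3 over TCP, and it is not available until glusterd has brought the volume up, so a mount attempted too early in the boot sequence fails with the "requested NFS version or transport protocol is not supported" error. A sketch of an fstab line that pins the protocol and defers the mount until networking is up, with hypothetical server and volume names:

    # force NFSv3 over TCP and treat the mount as network-dependent
    server1:/myvol  /mnt/gluster  nfs  vers=3,proto=tcp,_netdev  0 0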
19:47 jag3773 joined #gluster
19:49 T0aD- joined #gluster
19:50 sroy joined #gluster
19:51 blaa wrong configuration or problems with the storage functionality
19:52 Mick27 blaa: my system has been up for several months
19:52 Mick27 except for the aptitude upgrade
19:52 Mick27 I haven't touched anything gluster wise
19:52 juhaj_ joined #gluster
19:52 blaa you cannot just change IPs
19:52 Mick27 I didn't change them
19:52 Mick27 just the underlying conf
19:53 blaa what conf?
19:53 psyl0n joined #gluster
19:53 Mick27 basically I went from the twisted hosting company old conf
19:53 Mick27 to the new, more standard one
19:53 Mick27 == some routes changed, but the ip stayed the same
19:54 Mick27 I have some hack in my config, I wait for 20seconds on boot
19:54 blaa look at /var/log/gluster.log from the client side to see why it's not mounting
19:54 Mick27 then mount -a
19:54 Mick27 ok
19:54 Mick27 that is what I was looking for
19:55 Mick27 hmm
19:55 blaa try to mount with the options -o log-level=WARNING,log-file=/var/log/gluster.log
19:55 Mick27 possibly some change from debian on nfs
19:55 blaa no
19:55 Mick27 directly in the fstab?
19:55 blaa you can test on command line first
19:55 Mick27 k
19:56 blaa mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log NODE_NAME:/VOLNAME /mnt
19:56 blaa change NODE_NAME & VOLNAME
19:58 Mick27 blaa: I am using nfs though
19:58 Mick27 could be my issue is not related to gluster itself ?
19:59 Mick27 somehow i rebooted a node
19:59 Mick27 and it mounted just fine
19:59 blaa could be many reasons
20:00 Mick27 hmm
20:00 Mick27 seems to be back to normal
20:00 Mick27 I'll try this logging when I can
20:00 Mick27 many thanks blaa
20:01 klaxa joined #gluster
20:02 stigchri1tian joined #gluster
20:02 NeatBasis joined #gluster
20:02 tjikkun joined #gluster
20:02 ron-slc joined #gluster
20:02 XpineX joined #gluster
20:03 nonsenso_ joined #gluster
20:03 xavih_ joined #gluster
20:03 xymox joined #gluster
20:04 jiqiren joined #gluster
20:04 blaa sure, i wish somebody knew the issue i'm having now, which is kind of critical.. i mean i hope :)
20:05 hagarth1 joined #gluster
20:10 _mattf joined #gluster
20:15 Slasheri joined #gluster
20:16 skered- joined #gluster
20:34 kevin joined #gluster
20:35 Kevin10 Hi! I have a short question concerning replicated volumes: in my current test setup one replicated brick (out of 2) failed (HDD crash). Is it possible to simply "dd if=working_replicated_brick of=new_brick" and replace the failed brick with new_brick afterwards?
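For context, a hedged sketch of the conventional replacement procedure rather than a block-level copy, assuming GlusterFS 3.4-era commands and hypothetical volume/brick names; the usual advice is to bring in an empty brick and let self-heal repopulate it from the surviving replica instead of cloning the good brick with dd:

    # swap the dead brick for a fresh, empty one (same host, new path shown here)
    gluster volume replace-brick myvol server2:/bricks/brick1 server2:/bricks/brick1-new commit force
    # trigger a full self-heal so the new brick is populated from the survivor
    gluster volume heal myvol full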
20:55 yosafbridge joined #gluster
20:59 sroy joined #gluster
21:07 tryggvil joined #gluster
21:14 hagarth joined #gluster
21:20 F^nor joined #gluster
21:23 Zylon joined #gluster
21:26 F^nor joined #gluster
21:45 zwu joined #gluster
22:04 mkzero joined #gluster
22:04 badone joined #gluster
22:13 rotbeard joined #gluster
22:36 qdk joined #gluster
23:00 CROS_ joined #gluster
23:01 Mick27 left #gluster
23:22 psyl0n joined #gluster
23:47 r0b joined #gluster
