
IRC log for #gluster, 2014-10-27


All times shown according to UTC.

Time Nick Message
00:23 calum_ joined #gluster
00:41 bala joined #gluster
01:10 diegows joined #gluster
01:26 lyang0 joined #gluster
01:39 DV__ joined #gluster
01:53 harish joined #gluster
01:57 suliba joined #gluster
01:57 calisto joined #gluster
02:38 calisto joined #gluster
02:39 haomaiwang joined #gluster
02:40 julim joined #gluster
02:41 kshlm joined #gluster
02:55 bharata-rao joined #gluster
03:23 hagarth joined #gluster
03:50 lalatenduM joined #gluster
03:53 nbalachandran joined #gluster
04:00 shubhendu joined #gluster
04:01 nbalachandran joined #gluster
04:15 sputnik13 joined #gluster
04:18 ira joined #gluster
04:19 sputnik13 joined #gluster
04:31 RameshN joined #gluster
04:33 kdhananjay joined #gluster
04:34 rafi1 joined #gluster
04:38 anoopcs joined #gluster
04:44 sputnik13 joined #gluster
04:45 calisto joined #gluster
04:47 ppai joined #gluster
04:49 msmith joined #gluster
04:59 kanagaraj joined #gluster
05:03 prasanth_ joined #gluster
05:03 anoopcs joined #gluster
05:05 jiffin joined #gluster
05:08 raghu joined #gluster
05:20 bala joined #gluster
05:24 atinmu joined #gluster
05:26 ndarshan joined #gluster
05:28 R0ok_ joined #gluster
05:31 sputnik13 joined #gluster
05:32 karnan joined #gluster
05:39 kshlm joined #gluster
05:39 rjoseph joined #gluster
05:40 Humble joined #gluster
05:42 kshlm joined #gluster
05:50 msmith joined #gluster
05:50 sahina joined #gluster
05:52 kshlm joined #gluster
05:53 ramteid joined #gluster
06:01 atalur joined #gluster
06:06 overclk joined #gluster
06:10 aravindavk joined #gluster
06:15 soumya__ joined #gluster
06:18 ira joined #gluster
06:19 RaSTar joined #gluster
06:23 nbalachandran joined #gluster
06:28 meghanam joined #gluster
06:28 meghanam_ joined #gluster
06:29 kshlm joined #gluster
06:36 SOLDIERz joined #gluster
06:45 elico joined #gluster
06:47 SOLDIERz_ joined #gluster
06:51 msmith joined #gluster
07:00 kumar joined #gluster
07:00 ctria joined #gluster
07:04 glusterbot New news from newglusterbugs: [Bug 1157381] mount fails for nfs protocol in rdma volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1157381>
07:07 bala joined #gluster
07:07 kshlm joined #gluster
07:10 kshlm joined #gluster
07:12 rgustafs joined #gluster
07:14 ricky-ticky joined #gluster
07:14 aravindavk joined #gluster
07:18 atinmu joined #gluster
07:19 rjoseph joined #gluster
07:22 deepakcs joined #gluster
07:27 Slydder joined #gluster
07:32 lalatenduM joined #gluster
07:33 Fen2 joined #gluster
07:34 aravindavk joined #gluster
07:34 atinmu joined #gluster
07:38 rastar_afk joined #gluster
07:40 Fen2 Hi all :)
07:40 rastar_afk joined #gluster
07:42 ppai joined #gluster
07:44 rjoseph joined #gluster
07:52 msmith joined #gluster
07:57 harish joined #gluster
08:17 cjanbanan joined #gluster
08:23 harish joined #gluster
08:30 siel joined #gluster
08:35 glusterbot New news from newglusterbugs: [Bug 1017215] Replicated objects duplicates <https://bugzilla.redhat.com/show_bug.cgi?id=1017215>
08:37 nbalachandran joined #gluster
08:37 hybrid512 joined #gluster
08:44 rtalur_ joined #gluster
08:52 msmith joined #gluster
08:54 vikumar joined #gluster
08:54 xrubbit joined #gluster
08:54 xrubbit hi everybody
08:55 ira joined #gluster
08:56 xrubbit !list
09:02 Slashman joined #gluster
09:08 kshlm joined #gluster
09:08 kshlm joined #gluster
09:14 Norky joined #gluster
09:15 Yossarianuk joined #gluster
09:17 atinmu joined #gluster
09:21 saurabh joined #gluster
09:26 bala1 joined #gluster
09:44 kdhananjay joined #gluster
09:53 msmith joined #gluster
09:53 MrAbaddon joined #gluster
10:05 glusterbot New news from newglusterbugs: [Bug 1157457] Bad /etc/logrotate.d file installed by glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1157457> || [Bug 1157462] Dead Link - Missing Documentation <https://bugzilla.redhat.com/show_bug.cgi?id=1157462> || [Bug 1099690] unnecessary code in gf_history_changelog_done() <https://bugzilla.redhat.com/show_bug.cgi?id=1099690> || [Bug 1099922] Unchecked buffer fill by gf_readline
10:07 ndevos lalatenduM: you want to take care of bug 1157457?
10:07 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1157457 unspecified, unspecified, ---, bugs, NEW , Bad /etc/logrotate.d file installed by glusterfs
10:08 lalatenduM ndevos, checking
10:10 lalatenduM ndevos, I think it is a duplicate of bug https://bugzilla.redhat.com/show_bug.cgi?id=1126802
10:10 glusterbot Bug 1126802: high, medium, ---, lmohanty, ASSIGNED , glusterfs logrotate config file pollutes global config
10:13 ndevos lalatenduM: argh, yes, it would be :-/
10:14 lalatenduM ndevos, I have been lazy abtthe bug, will take it with priority
10:15 glusterbot New news from resolvedglusterbugs: [Bug 1157457] Bad /etc/logrotate.d file installed by glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1157457>
10:16 MrAbaddon joined #gluster
10:17 harish joined #gluster
10:18 ctria joined #gluster
10:19 ndevos lalatenduM++ thanks, add me as reviewer on the patch, and afterwards post the fix to bug 1126801 too please
10:19 glusterbot ndevos: lalatenduM's karma is now 2
10:19 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1126801 high, medium, ---, lmohanty, ASSIGNED , glusterfs logrotate config file pollutes global config
10:20 ndevos lalatenduM: I'd like to get that fix in the next beta for 3.5, so that bug 1157160 gets closed when the update is available
10:20 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1157160 unspecified, unspecified, ---, kkeithle, NEW , Bad /etc/logrotate.d file installed by glusterfs
10:20 lalatenduM ndevos, sure
10:30 mdavidson joined #gluster
10:33 Philambdo joined #gluster
10:40 atinmu joined #gluster
10:40 rjoseph joined #gluster
10:42 Guest61101 joined #gluster
10:43 ppai joined #gluster
10:44 nbalachandran joined #gluster
10:45 mbukatov joined #gluster
10:54 msmith joined #gluster
10:58 tdasilva joined #gluster
11:04 ctria joined #gluster
11:05 glusterbot New news from newglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956>
11:16 nbalachandran joined #gluster
11:20 julim joined #gluster
11:23 kanagaraj joined #gluster
11:23 mojibake joined #gluster
11:24 capri joined #gluster
11:27 atinmu joined #gluster
11:36 diegows joined #gluster
11:41 tdasilva joined #gluster
11:46 soumya_ joined #gluster
11:51 MrAbaddon joined #gluster
11:51 meghanam joined #gluster
11:51 meghanam_ joined #gluster
11:52 sputnik13 joined #gluster
11:55 msmith joined #gluster
12:00 edward1 joined #gluster
12:02 LebedevRI joined #gluster
12:05 ctria joined #gluster
12:17 ppai joined #gluster
12:19 atinmu joined #gluster
12:21 mojibake joined #gluster
12:21 virusuy joined #gluster
12:21 virusuy joined #gluster
12:30 soumya_ joined #gluster
12:39 xrubbit joined #gluster
12:43 _dist joined #gluster
12:45 ricky-ticky joined #gluster
12:52 calisto joined #gluster
12:54 torbjorn__ left #gluster
12:55 msmith joined #gluster
12:57 clutchk joined #gluster
12:58 ron-slc joined #gluster
12:58 atinmu joined #gluster
12:59 chirino joined #gluster
12:59 Fen1 joined #gluster
13:02 bene joined #gluster
13:03 dguettes joined #gluster
13:04 lalatenduM joined #gluster
13:05 rgustafs joined #gluster
13:05 glusterbot New news from newglusterbugs: [Bug 1157659] GlusterFS allows insecure SSL modes <https://bugzilla.redhat.com/show_bug.cgi?id=1157659>
13:18 ctria joined #gluster
13:18 theron joined #gluster
13:19 MrAbaddon joined #gluster
13:29 theron joined #gluster
13:34 Fen1 joined #gluster
13:38 kkeithley @later tell JoeJulian re: S30samba-*.sh hooks, looks like an oversight in the 3.5 packaging. (They're in the glusterfs-server RPM for 3.6.0betaX.) I don't see that steven2 filed a BZ. :-(
13:38 glusterbot kkeithley: The operation succeeded.
13:40 RameshN joined #gluster
13:54 meghanam_ joined #gluster
13:54 meghanam joined #gluster
13:58 skippy I have a replica 2 volume on one subnet. Both servers happily talk to one another. A client on another subnet mounts this volume via FUSE.
13:58 skippy irregularly, the client reports "remote operation failed: Transport endpoint is not connected."
13:58 skippy I'm not seeing any dropped Ethernet packets.  Network folks don't see anything at the switches.
13:59 skippy I'm a little stymied as to where else to look.
13:59 skippy the client reports "has not responded in the last 42 seconds, disconnecting." for both servers; but neither server reports errors.
14:00 skippy 10 seconds later, the client reconnects to both servers.
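
The "42 seconds" skippy quotes matches GlusterFS's default client ping timeout, the network.ping-timeout volume option. A minimal sketch of inspecting and setting it follows; the volume name is a placeholder, gluster volume info only lists options changed from their defaults, and raising the timeout masks rather than fixes whatever is stalling the connection:

    # List reconfigured options for the volume (defaults are not shown)
    gluster volume info <volname>

    # Explicitly set the client ping timeout; 42 seconds is the default
    gluster volume set <volname> network.ping-timeout 42
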
14:03 coreGrl joined #gluster
14:06 glusterbot New news from newglusterbugs: [Bug 1108448] selinux alerts starting glusterd in f20 <https://bugzilla.redhat.com/show_bug.cgi?id=1108448>
14:06 _dist skippy: it does sound like a network issue but it might not be an infrastructure problem; it could just as easily be a gluster setup issue. What does your gluster peer network look like?
14:08 msmith joined #gluster
14:09 msmith joined #gluster
14:11 skippy _dist: three Gluster servers: two hosting bricks (replica 2), one with no bricks serving quorum only.  What, specifically, are you requesting to see?
14:13 _dist skippy: are they on the same subnet, is the gluster replica separate. Is the client on the same switch, different location. Does the client resolve using dns. During the "outage" can the client ping, do the logs in /var/log/glusterfs/bricks complain during the outage?
14:14 skippy brick-hosting servers are physical.  quorum-only server is virtual.  same subnet, different switches.  client is virtual, on different subnet.  unsure of which physical switch(es) its using.
14:15 skippy during the outage, all other activity seems fine.  client is an app server, writing uploaded files to Gluster.  web app continues to function fine during outage.
14:15 skippy gluster servers report nothing in /var/log/gluster/brick for this volume: that log is empty
14:16 _dist skippy: next I'd check the glusterd.vol.log then. During the outage your client (on the different subnet) can ping the physical gluster hosts?
14:17 skippy havent tried to ping, because outage is so irregular; and recovers so fast.
14:18 skippy I dont see glusterd.vol.log .. where should I be looking?
14:18 _dist skippy: The two simplest explanations is that one or both of the gluster services are failing during the outage, or more likely that the client's connection is failing during that time
14:18 plarsen joined #gluster
14:18 _dist skippy: it starts with etc- for the .vol.log
14:18 _dist in /var/log/glusterfs
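
A rough sketch of where to look, assuming the default 3.5 log locations; the mount-point-derived client log name (e.g. mnt-gv0.log for a mount at /mnt/gv0) is illustrative:

    # On the client: the FUSE mount log is named after the mount point
    grep -iE "disconnect|not connected|ping_timer" /var/log/glusterfs/mnt-gv0.log

    # On each server: the management daemon log plus the per-brick logs
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    tail -n 100 /var/log/glusterfs/bricks/*.log
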
14:19 plarsen joined #gluster
14:20 bennyturns joined #gluster
14:20 wushudoin joined #gluster
14:21 _dist skippy: I understand how frustrating unpredictable production errors are, especially if you can't reproduce them on demand. The simplest test is to create another VM that does the same mount and see if they both have the same problem at the same time. In addition you could log pings from one or both until this happens again, assuming you find nothing in all of the logs for those periods
14:21 skippy thanks.  nothing on either server in /var/log/gluster/etc-... during the latest recorded outage from the client.
14:21 _dist skippy: btw gluster logs usually use UTC, not your machines local time
14:22 skippy yeah, can I set gluster to use localtime?  I hate converting time.
14:22 _dist skippy: I am just as annoyed as you :) I have not found a way to do it beyond parsing the log
14:23 skippy on client, i have "[2014-10-25 06:34:56.879706] C [client-handshake.c:127:rpc_client_ping_timer_expired] 0-integration_epa-client-0: server 192.168.30.107:49156 has not responded in the last 42 seconds, disconnecting."
14:23 skippy nothing at that timestamp on either server.  On one server, I have log entries at 6:30, and then 6:39.  Other server has entries at 6:32 and 6:45
14:25 skippy nothing on the third Gluster server, either; but I guess that's to be expected since it has no bricks.
14:27 skippy _dist: https://gist.github.com/skpy/d0e46538c14b126cf7a7  that's what the client reports.
14:27 glusterbot Title: gist:d0e46538c14b126cf7a7 (at gist.github.com)
14:28 _dist skippy: I'd take a close look at the logs based on current activity; iirc some use UTC but some use local time (I might be wrong on that). Either way, there usually won't be a ton of stuff in there. I suspect that since the server logs aren't complaining about anything, this is a client access issue
14:28 _dist skippy: are your server and client versions similar? it's better to run them close, I've not experienced problems myself running versions far apart but the recommended practice is to run the same versions
14:28 skippy yes, it smells like a client issue.
14:29 skippy 3.5.2-1.el6.x86_64 across the board, from Gluster repo.
14:29 skippy sorry, el7 for servers; el6 for client.
14:29 _Bryan_ joined #gluster
14:29 DV joined #gluster
14:30 rcaskey joined #gluster
14:30 skippy but all pulling from Gluster repo, same major/minor version.
14:30 _dist skippy: we're running the same for a file share proxy (gluster --> smb) so I suspect it's either a setup or network issue with the client
14:31 skippy i added the fstab entry to that gist linked above.  i don't think we're doing anything particularly goofy.
14:36 skippy does Gluster use raw ICMP or some other kind of ping?  It appears we are filtering ICMP echo from the client to the server subnets.
14:36 skippy but that seems like a red herring, since the volume mounts up just fine
14:44 jobewan joined #gluster
14:45 kkeithley Gluster does everything using RPC over TCP.
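
Since everything is RPC over TCP, a quick reachability check from the client can rule the basics in or out. Hostnames below are placeholders; 24007 is glusterd's management port, and 3.4/3.5 bricks normally listen on 49152 and up (gluster volume status shows the real ones):

    # On a server: see which port each brick process is actually using
    gluster volume status <volname>

    # From the client: TCP checks to glusterd and a brick port
    nc -zv server1.example.com 24007
    nc -zv server1.example.com 49152
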
14:46 xrubbit joined #gluster
14:50 xrubbit joined #gluster
14:52 skippy how might I then try to diagnose transient connection issues between a RHEL6 client on one subnet, and a pair of RHEL7 GLuster servers on a different subnet?
14:53 _dist skippy: sorry for the long response wait. If I were you I'd setup another VM with the same setup (because I'm lazy). I suppose the correct way to do it is a packet capture; another lazy way is a ping test, but it won't tell you more than the fact that it isn't the gluster client's fault
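
A minimal capture along the lines _dist describes, left running on the client until the next disconnect; the interface, hostnames and port range are placeholders to adjust for the actual environment:

    # Capture only gluster-related TCP traffic to the two brick servers
    tcpdump -i eth0 -s 0 -w /var/tmp/gluster-client.pcap \
        'tcp and (host server1 or host server2) and (port 24007 or portrange 49152-49251)'
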
14:56 skippy thanks _dist.
14:58 calisto joined #gluster
14:59 kumar joined #gluster
15:01 theron joined #gluster
15:05 B21956 joined #gluster
15:07 sputnik13 joined #gluster
15:08 lpabon joined #gluster
15:09 overclk joined #gluster
15:13 jbrooks joined #gluster
15:18 bene joined #gluster
15:21 nbalachandran joined #gluster
15:27 edong23 joined #gluster
15:27 lmickh joined #gluster
15:31 overclk joined #gluster
15:38 diegows joined #gluster
16:01 plarsen joined #gluster
16:07 theron joined #gluster
16:11 RameshN joined #gluster
16:13 soumya__ joined #gluster
16:15 PeterA joined #gluster
16:15 kshlm joined #gluster
16:15 diegows joined #gluster
16:16 chirino joined #gluster
16:16 overclk joined #gluster
16:20 xrubbit joined #gluster
16:20 R0ok_ joined #gluster
16:24 overclk_ joined #gluster
16:40 overclk joined #gluster
16:41 jbrooks joined #gluster
16:43 R0ok_|mkononi joined #gluster
16:44 R0ok_|mkononi stickyboy: corp net is also down
16:49 jobewan joined #gluster
16:50 zerick joined #gluster
16:53 sputnik13 joined #gluster
16:59 kr0w joined #gluster
17:02 sazze joined #gluster
17:03 sazze hello gluster community and support, I have done something terrible -- gluster volume add-brick ops replica 3 10.0.28.53:/data/ops/
17:03 sazze replica 3 seems to be a real dumb thing to have done, in retrospect.  I was testing replication speeds and now can NOT put back to replica 2
17:03 sazze config is now "Type: Distributed-Replicate"
17:03 sazze "Number of Bricks: 1 x 2 = 3"
17:03 sazze help please!
17:04 skippy can you remove brick and tell it replica 2 now?
17:04 sazze not via cli, command
17:04 sazze gluster volume remove-brick ops replica 2 10.0.28.53:/data/ops start
17:04 sazze fails error
17:05 skippy what error?
17:05 sazze number of bricks provided (1) is not valid. need at least 2 (or 2xN)
17:05 sazze I'm running 331
17:06 sazze all the inodes and data sizes on all the servers are a 100% match, so it looks like it is a replica, not a distr.  just need to get back to where I was last night before this misguided test
17:06 semiosis that's pretty old
17:07 hagarth sazze: might be worth restarting glusterd and then attempt remove-brick
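
A sketch of the restart-then-retry path hagarth suggests, using the brick from sazze's paste. Whether the replica reduction is accepted still depends on the 3.3.1 behaviour being discussed, and "force" (which skips the start/status/commit migration cycle, plausible here since the third replica holds no unique data) is an assumption worth double-checking against that release:

    # Restart the management daemon on each server
    # (systemctl restart glusterd on systemd-based distros)
    service glusterd restart

    # Retry dropping back to two replicas, naming only the brick that was added
    gluster volume remove-brick ops replica 2 10.0.28.53:/data/ops force
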
17:07 cfeller_ joined #gluster
17:07 sazze hagarth, tempting, if that does not work, what do you think of rewriting the config files to what they were prior to this fiasco?
17:08 semiosis istr diagnosing this issue a long time ago.  trying to find the bug report
17:08 sazze thanks semiosis, and yes 331 is a little old but moving up meant dealing with all kinds of os differences...
17:08 hagarth semiosis: I vaguely recollect fixing this bug
17:09 semiosis hagarth++
17:09 glusterbot semiosis: hagarth's karma is now 4
17:09 JoeJulian Pretty sure that was fixed in the 3.3 series.
17:09 hagarth sazze: yes, rewriting the config files would probably be the only option then.
17:11 sazze Hi JoeJulian, I've been all over your blog prior to losing my brain last night, thanks for the great posts.  I have some questions about http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
17:11 glusterbot Title: GlusterFS bit by ext4 structure change (at joejulian.name)
17:11 semiosis ask away
17:12 sazze this ext4 vs xfs thing, does this only matter for 32 bit clients and/or clients using nfsv2?
17:14 semiosis iirc it affected all clients of servers with ext4 bricks and kernels with the change (which was introduced in linux mainline 3.3.0 and backported widely)
17:16 sazze is there any chance the problem also extends to cause very very slow self healing?
17:16 semiosis doubt it
17:17 sazze what does slow down healing?  What does a server need more of to heal faster?
17:17 sazze all my inodes propagated after a day or so, but files on new brick (replica 2, not this replica 3 mess) are taking days
17:17 sazze even with stats
17:17 semiosis depends on the files but could be cpu cycles, network latency, throughput, or some combo
17:18 sazze all the files are there as 0-size, and clients are getting served the 0's :(
17:18 soumya joined #gluster
17:18 sazze on occasion
17:18 sazze 25-35% of the time, needs a retry
17:18 semiosis the 0-len files appear on a brick when the directory is healed, then the data should get filled in eventually, or when the file is accessed
17:18 sazze I have ganglia stats showing plenty of network avail
17:19 sazze supposing there is overwhelming
17:19 semiosis if a client sees empty files then i'd check to make sure it's connected to all bricks, maybe remount the client
17:19 sazze nfs vs heal contention, gluster seems to drop the healing, no?
17:19 kkeithley joined #gluster
17:20 sazze remounted on my test client to no avail -- do I need to remount every client?
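
A short sketch of what semiosis suggests checking, plus kicking off a full self-heal from any server; the volume name and the mount-point-derived client log name are placeholders, and the heal commands assume the CLI available from 3.3 onward:

    # Confirm the client mount is connected to every brick: look for
    # "Connected to" / "disconnected" lines for each <vol>-client-N
    grep -i "connect" /var/log/glusterfs/mnt-<vol>.log

    # From any server: crawl and heal everything, then watch progress
    gluster volume heal <vol> full
    gluster volume heal <vol> info
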
17:24 vikumar joined #gluster
17:25 ekuric joined #gluster
17:31 vikumar joined #gluster
17:45 lpabon joined #gluster
17:54 R0ok_|kejani joined #gluster
17:59 ninkotech joined #gluster
18:00 ninkotech_ joined #gluster
18:01 zerick joined #gluster
18:15 MacWinner joined #gluster
18:20 diegows joined #gluster
18:22 rotbeard joined #gluster
18:25 xrubbit joined #gluster
18:30 lalatenduM joined #gluster
18:35 virusuy joined #gluster
18:35 virusuy joined #gluster
18:42 the-me semiosis: :)
18:42 xrubbit ehi =)
18:57 semiosis the-me: hi!
18:57 and` joined #gluster
18:58 rshott joined #gluster
18:59 xrubbit how can i create a cluster on debian??
18:59 xrubbit failover
19:00 the-me semiosis: you have got mail :)
19:01 semiosis don't see it yet
19:02 _dist xrubbit: what do you mean by failover? What are you hoping to setup (use case etc)
19:04 the-me you have greylisted me :'(
19:04 the-me *whine*
19:04 the-me *postfix flush* *postfix flush* :D
19:05 xrubbit i mean for cluster failover
19:05 xrubbit how can i setup?
19:06 semiosis the-me: pm
19:23 n-st joined #gluster
19:26 semiosis xrubbit: glusterfs fuse clients provide automatic HA
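
A hedged sketch of what that looks like in practice: the native FUSE mount only uses the named server to fetch the volume layout and then talks to every brick directly, so it keeps working if one replica goes down. Hostnames, the volume name and the backupvolfile-server mount option (spelled backup-volfile-servers in newer releases) are illustrative:

    # /etc/fstab
    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
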
19:27 xrubbit glusterfs is not for storage??
19:27 semiosis xrubbit: glusterfs is a virtual filesystem used to combine local storage from many servers
19:31 jobewan joined #gluster
19:32 xrubbit joined #gluster
19:33 xrubbit_ joined #gluster
19:33 xrubbit_ hi
19:33 glusterbot xrubbit_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:34 semiosis xrubbit: glusterfs is a virtual filesystem used to combine local storage from many servers
19:35 xrubbit is a cluster storage solution??
19:36 _dist xrubbit: it's not block storage, or an FS really. It's a layer that runs over those things to keep them in sync. So you'd have something like xfs on two servers, you write to one and they both have it instantly (with replication mode)
19:37 xrubbit automatically?
19:37 _dist well, FS is a pretty broad term, so I wouldn't object to calling it an FS
19:37 _dist xrubbit: synchronously by default, async if you want
19:37 xrubbit wow =)
19:37 xrubbit i can setup??
19:38 _dist http://www.gluster.org/community/documentation/index.php/QuickStart
19:39 _dist xrubbit: the basic idea is each "volume" is made up of "bricks". Each brick sits on a formatted FS like ext, xfs, whatever really. You create a volume with "gluster volume create" and point it to all the bricks you want in the volume. I recommend for testing and to get to know it you can just create both bricks on the same server
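
A minimal sketch of that single-machine test setup, along the lines of the QuickStart linked above; the hostname, brick paths and volume name are placeholders, and "force" is needed in this toy example because the bricks sit on the root filesystem and on the same server:

    # Two bricks on one box, purely for getting familiar with the CLI
    mkdir -p /data/brick1/gv0 /data/brick2/gv0
    gluster volume create gv0 replica 2 \
        server1:/data/brick1/gv0 server1:/data/brick2/gv0 force
    gluster volume start gv0
    gluster volume info gv0
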
19:40 xrubbit joined #gluster
19:40 xrubbit_ joined #gluster
19:40 _dist xrubbit: One rule though, never write directly to the brick locations, always write through a gluster api (nfs, glusterfs fuse, libgfapi, vfs). If you write data directly without letting gluster know that data won't be considered something that needs to be healed or sync'd
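
Putting that rule in concrete terms, a hypothetical mount and write that go through the FUSE client rather than the brick directory:

    # Mount the volume and write through it...
    mount -t glusterfs server1:/gv0 /mnt/gv0
    echo hello > /mnt/gv0/test.txt

    # ...never directly into a brick path like /data/brick1/gv0/ --
    # gluster would not know about, replicate, or heal such a file
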
19:42 semiosis _dist++
19:42 glusterbot semiosis: _dist's karma is now 1
19:43 semiosis _dist++
19:43 glusterbot semiosis: _dist's karma is now 2
19:43 _dist :)
19:43 xrubbit wow its really great
19:43 samsaffron___ joined #gluster
19:43 xrubbit i can setup glusterfs on centos 7??
19:44 _dist xrubbit: yes, RHEL is actually a big part of gluster. We run ours on debian/ubuntu but I've ran it on centOS 6 and talked to people running it on 7 should be pretty simple
19:45 xrubbit so you can help too LVM?
19:46 _dist xrubbit: you can use LVM, that might be out of the scope of this channel though, unless someone wants to pipe in who uses LVM as their base for gluster. I believe redhat recommends XFS, I'm using ZFS.
19:46 xrubbit what is the redhat channel?
19:47 _dist xrubbit: no idea :) never looked, but gnome-disk-utility is a decent GUI that helps you setup LVM if you want to use it instead
19:47 lalatenduM joined #gluster
19:48 xrubbit any channel for linux tutorial??
19:48 _dist xrubbit: the people in #debian, #ubuntu, and #linux are probably the biggest channels for that (by # of users)
19:49 xrubbit thank you so much _dist
19:49 xrubbit really really thanks
19:49 xrubbit =)
19:50 _dist xrubbit: np, good luck with your install. There's a bit of a learning curve, but in my experience gluster is the easiest clustered FS to setup
19:50 xrubbit and for create an HPC or failover server??
19:50 xrubbit can i use heartbeat??
19:51 xrubbit l
19:51 _dist xrubbit: you won't need any special tools for quorum or failover; it handles all that from a storage perspective on its own
19:51 xrubbit no no for create a failover database
19:51 _dist xrubbit: if a server goes down, when it comes back up it automatically heals
19:51 xrubbit gluster do all this??or is only for storage??
19:52 _dist xrubbit: gluster does all of that, as long as you have enough bricks up (sort of like raid) your storage network will just stay up, and bricks that recover will bring themselves back into sync without intervention
19:54 xrubbit joined #gluster
19:54 xrubbit ll
19:54 xrubbit gluster do all this??or is only for storage??
19:54 _dist xrubbit: (repost, you were gone) gluster does all of that, as long as you have enough bricks up (sort of like raid) your storage network will just stay up, and bricks that recover will bring themselves back into sync without intervention
19:54 xrubbit ah ok
19:55 _dist xrubbit: but it depends on what you mean by only storage
19:55 xrubbit and for database failover ?
19:55 xrubbit what can i use?
19:55 madphoenix joined #gluster
19:56 madphoenix Hi all, quick question.  When upgrading from 3.4.x to 3.5.x, is it necessary to stop the volume?  The docs say to just stop all glusterd/glusterfsd processes, but not that the volume should be stopped first.
19:56 xrubbit joined #gluster
19:56 _dist xrubbit: well it depends, we use VMs hosted on gluster. However, the right way to do DB HA in my opinion is at the software DB level (two DBs running and keeping in sync through their application). You can't always trust machines to fsync
19:56 _dist xrubbit: (repeat again, you drop a lot) well it depends, we use VMs hosted on gluster. However, the right way to do DB HA in my opinion is at the software DB level (two DBs running and keeping in sync through their application). You can't always trust machines to fsync
19:56 _dist madphoenix: I believe it's only unsafe if you still have files listed as healing in "gluster volume heal vol info"
19:57 semiosis madphoenix: you should be able to do each server individually (all servers before any clients) while keeping the rest online
19:57 madphoenix since i'm scheduling a full downtime anyways, is it "safer" to do a volume stop, then perform upgrades, then volume start?
19:57 madphoenix as opposed to applying them one at a time
19:57 madphoenix in a distributed volume
19:57 madphoenix (no rep)
19:58 semiosis oohhh distributed, yeah you really want all bricks online all the time or file ops will fail
19:58 madphoenix exactly, thats what i was thinking
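
A sketch of the full-downtime path for a pure distribute volume, as madphoenix is planning; the volume name is a placeholder, the package and service commands assume an EL-style box, and the exact sequence should be checked against the upstream 3.4-to-3.5 upgrade notes:

    # From any one server, take the volume down cleanly
    gluster volume stop <vol>

    # On every server: upgrade packages, then restart the management daemon
    yum update glusterfs glusterfs-server glusterfs-fuse
    service glusterd restart

    # Bring the volume back, then remount and upgrade the clients
    gluster volume start <vol>
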
19:58 xrubbit thank you _dist
19:58 madphoenix i was just surprised that the docs don't recommend volume stop
19:58 semiosis everyone but you uses replication :)
19:59 madphoenix i've been getting that feeling! ;)
19:59 _dist semiosis: :) I have honestly not even tried distributed volumes yet, except by accident (not a fun accident)
20:00 madphoenix still trying to convince management that it's a good idea to run replication + RAID, but to be fair our availability requirements for this volume are pretty trivial.
20:00 madphoenix in any case, thanks as always for the advice folks'
20:01 semiosis yw
20:06 xrubbit im going out...
20:06 xrubbit thanks at all
20:06 xrubbit bye
20:29 theron joined #gluster
20:37 glusterbot New news from newglusterbugs: [Bug 1157839] link(2) corrupts meta-data of encrypted files <https://bugzilla.redhat.com/show_bug.cgi?id=1157839>
20:39 sputnik1_ joined #gluster
20:40 sputnik13 joined #gluster
20:44 JoeJulian heh, coincidence... brick pid is 24007
21:36 ekuric left #gluster
21:40 n-st joined #gluster
22:01 zerick joined #gluster
22:12 ricky-ti1 joined #gluster
22:28 sputnik13 joined #gluster
22:30 badone joined #gluster
22:44 badone joined #gluster
22:49 MrAbaddon joined #gluster
22:51 semiosis JoeJulian: https://twitter.com/pragmaticism/status/420263440075087872
22:51 glusterbot Title: GHOULISH SPOOKerman on Twitter: "WAT?!?!?! http://t.co/p1mJLyoVfZ" (at twitter.com)
23:00 7GHAAGPRE joined #gluster
23:08 JoeJulian semiosis: Caught the mileage on my car as a palindrome last week...
23:16 semiosis hah
23:40 firemanxbr joined #gluster
23:48 andreask joined #gluster
23:56 plarsen joined #gluster
