
IRC log for #gluster, 2016-02-11


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:02 theron joined #gluster
00:10 JoeJulian @brick order
00:10 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
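
A sketch of the pairing the factoid describes, using its own names (each consecutive group of "replica" bricks forms one replica set):

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    # replica set 1: server1:/data/brick1 <-> server2:/data/brick1
    # replica set 2: server3:/data/brick1 <-> server4:/data/brick1
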
00:25 ovaistariq joined #gluster
00:32 nekrodesk joined #gluster
00:52 shyam joined #gluster
00:53 cliluw joined #gluster
00:56 theron joined #gluster
01:01 nishanth joined #gluster
01:02 haomaiwang joined #gluster
01:02 ovaistariq joined #gluster
01:11 calavera joined #gluster
01:27 B21956 joined #gluster
01:33 cliluw joined #gluster
01:37 JPaul joined #gluster
02:03 haomaiwa_ joined #gluster
02:05 harish joined #gluster
02:06 calavera_ joined #gluster
02:14 DJCl34n joined #gluster
02:15 DJClean joined #gluster
02:15 Vaelatern joined #gluster
02:15 armyriad joined #gluster
02:21 cholcombe joined #gluster
02:21 gem joined #gluster
02:23 jmarley joined #gluster
02:28 calavera joined #gluster
02:50 calavera_ joined #gluster
02:53 calavera joined #gluster
02:56 calaver__ joined #gluster
03:03 haomaiwa_ joined #gluster
03:15 rcampbel3 joined #gluster
03:17 ovaistariq joined #gluster
03:17 jvandewege joined #gluster
03:20 RameshN joined #gluster
03:27 kanagaraj joined #gluster
03:34 bharata-rao joined #gluster
03:42 nbalacha joined #gluster
03:43 shubhendu joined #gluster
03:47 ramteid joined #gluster
03:48 nekrodesk joined #gluster
03:49 vmallika joined #gluster
03:52 Wizek joined #gluster
03:53 Manikandan joined #gluster
03:54 shyam joined #gluster
03:54 shyam1 joined #gluster
03:57 atinm joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 itisravi joined #gluster
04:25 Manikandan joined #gluster
04:31 SOLDIERz joined #gluster
04:33 RameshN joined #gluster
04:37 ppai joined #gluster
04:45 RameshN joined #gluster
04:49 purpleidea joined #gluster
04:49 purpleidea joined #gluster
04:55 jiffin joined #gluster
04:56 kshlm joined #gluster
04:57 nehar joined #gluster
05:00 julim joined #gluster
05:01 _ndevos joined #gluster
05:01 haomaiwang joined #gluster
05:03 ndarshan joined #gluster
05:03 pppp joined #gluster
05:03 shubhendu_ joined #gluster
05:05 kdhananjay joined #gluster
05:08 unlaudable joined #gluster
05:11 poornimag joined #gluster
05:24 aravindavk joined #gluster
05:26 Manikandan joined #gluster
05:26 shubhendu__ joined #gluster
05:27 gowtham joined #gluster
05:29 Apeksha joined #gluster
05:30 jiffin joined #gluster
05:31 ramky joined #gluster
05:32 poornimag joined #gluster
05:44 atalur joined #gluster
05:47 purpleidea joined #gluster
05:50 vmallika joined #gluster
05:51 ashiq joined #gluster
05:53 shubhendu_ joined #gluster
05:56 vimal joined #gluster
05:57 Bhaskarakiran joined #gluster
05:58 dlambrig_ joined #gluster
05:58 kshlm joined #gluster
06:00 shubhendu__ joined #gluster
06:01 6A4AB04UE joined #gluster
06:01 rafi joined #gluster
06:04 R0ok_ joined #gluster
06:16 karnan joined #gluster
06:16 karthikfff joined #gluster
06:19 skoduri joined #gluster
06:20 poornimag joined #gluster
06:21 hgowtham joined #gluster
06:24 dlambrig_ joined #gluster
06:27 jiffin poornimag: https://public.pad.fsfe.org/p/Upstream_Regression_Bad_test_list
06:27 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
06:27 rcampbel3 joined #gluster
06:28 pg joined #gluster
06:29 purpleidea joined #gluster
06:29 purpleidea joined #gluster
06:32 rafi joined #gluster
06:32 aravindavk joined #gluster
06:33 Manikandan joined #gluster
06:34 rjoseph joined #gluster
06:34 lalatenduM joined #gluster
06:34 jiffin joined #gluster
06:35 msvbhat joined #gluster
06:36 skoduri joined #gluster
06:36 kshlm joined #gluster
06:36 shruti joined #gluster
06:36 sac joined #gluster
06:37 nbalacha joined #gluster
06:37 pppp joined #gluster
06:37 atalur joined #gluster
06:37 kanagaraj joined #gluster
06:37 rastar joined #gluster
06:37 karnan joined #gluster
06:37 atinm joined #gluster
06:37 ashiq joined #gluster
06:37 Bhaskarakiran joined #gluster
06:37 gowtham joined #gluster
06:37 ppai joined #gluster
06:37 hgowtham joined #gluster
06:37 Apeksha joined #gluster
06:37 shyam joined #gluster
06:38 kdhananjay joined #gluster
06:38 vmallika joined #gluster
06:38 karthikfff joined #gluster
06:39 shyam1 joined #gluster
06:44 pg joined #gluster
06:46 RameshN joined #gluster
06:49 itisravi joined #gluster
06:55 arcolife joined #gluster
07:01 haomaiwa_ joined #gluster
07:06 Saravanakmr joined #gluster
07:10 shubhendu_ joined #gluster
07:12 kovshenin joined #gluster
07:13 baojg joined #gluster
07:16 shubhendu__ joined #gluster
07:18 Bhaskarakiran joined #gluster
07:20 jtux joined #gluster
07:23 purpleidea joined #gluster
07:24 renout left #gluster
07:29 Bhaskarakiran joined #gluster
07:33 SOLDIERz joined #gluster
07:34 [Enrico] joined #gluster
07:35 mhulsman joined #gluster
07:35 baojg joined #gluster
07:37 shubhendu_ joined #gluster
07:38 Bhaskarakiran joined #gluster
07:43 renout joined #gluster
07:55 baojg joined #gluster
08:01 haomaiwa_ joined #gluster
08:11 nishanth joined #gluster
08:13 wnlx joined #gluster
08:14 rcampbel3 joined #gluster
08:14 djgerm any ideas why a gitfs mount wouldn't mount on boot?
08:17 djgerm *glusterfs :)
08:18 djgerm it's in fstab:     git01.lab.com:git         /mnt/gfs        glusterfs       defaults,_netdev,log-file=/var/log/gfsmount.log  0 0
08:18 ivan_rossi joined #gluster
08:18 bhuddah djgerm: is it gitfs, gfs or glusterfs?
08:19 djgerm hehe it's glusterfs mounting the git volume and /mnt/gfs
08:19 djgerm and = at
08:19 djgerm for a git server
08:19 bhuddah that's a little confusing.
08:19 bhuddah but ok...
08:20 djgerm well yeah
08:20 djgerm it is… hehe.
08:20 bhuddah a minor thing in the syntax: it should be git01.lab.com:/git i think.
08:20 djgerm hmm well "mount -a" works and that reads from fstab
08:20 bhuddah you have any errors in the logs or in dmesg?
08:23 djgerm in that gfsmount.log:     [2016-02-11 08:21:00.457609] W [socket.c:514:__socket_rwv] 0-git-client-1: readv failed (No data available)
08:23 [diablo] djgerm, we're having exactly the same problem...
08:24 [diablo] djgerm, with RHGS
08:24 djgerm i thought that _netdev would do it....
08:24 [diablo] yup same here...
08:24 [diablo] we've got a case open with RH ... but we're still only getting bull info from a 1st liner in India
08:25 bhuddah hm. maybe up the log level to debug?
08:25 [diablo] FYI we debugged a bit, our original issue was it could not communicate with the gluster daemon
08:25 bhuddah the local gluster daemon?
08:25 [diablo] we found the glusterd.service file needed to be tweaked to start a little later
08:26 [diablo] but, after that, while it could connect to the daemon, it still did not mount :) ... once booted, a mount -a worked
08:27 kanagaraj joined #gluster
08:27 ndarshan joined #gluster
08:27 djgerm hmm. interesting. I have an issue/non-issue with glusterfs-server starting fast enough after install...
08:27 kshlm Are you guys trying to get the mount to happen on a server?
08:27 kshlm [diablo] djgerm ^
08:27 ndarshan joined #gluster
08:27 djgerm it's localhost
08:27 djgerm i just used the fqdn to make it easier
08:27 [diablo] I'm mounting to the host itself ..
08:28 [diablo] same as djgerm
08:28 kshlm And which distro are you using?
08:28 kshlm el7?
08:28 djgerm yeah. although the gluster volume "git" is a replica 4
08:28 [diablo] RHEL7 + RHGS on top
08:29 djgerm I'm on Ubuntu 14.04
08:29 djgerm 3.4.2
08:29 kshlm Basically, the problem for both of you is that the bricks haven't started yet.
08:29 kshlm Before the init system attempts to mount the gluster volume.
08:30 djgerm sounds right!
08:30 [diablo] OK sounds legit
08:30 kshlm GlusterD starts bricks once it connects to other glusterds on other servers.
08:30 kshlm But for it to do that, the network needs to be up.
08:31 kshlm When the network comes up, the init system does the _netdev mounts.
08:31 karnan joined #gluster
08:31 kshlm So it could happen that the bricks haven't started by the time mount happens.
08:31 kshlm We're trying to figure out how to solve this.
08:31 purpleidea joined #gluster
08:31 purpleidea joined #gluster
08:32 bhuddah hm. mount with background and retry maybe?
08:32 djgerm "sleep 100, mount"
08:32 djgerm :)
08:32 bhuddah or "mount -a" in rc.local ^^
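
bhuddah's rc.local workaround, as a minimal sketch (the mount point and retry counts are assumptions, not from the log):

    #!/bin/sh
    # /etc/rc.local: retry the fstab gluster mounts until the bricks are up
    for i in 1 2 3 4 5; do
        mountpoint -q /mnt/gfs && break   # stop once the volume is mounted
        mount -a -t glusterfs             # retry only glusterfs fstab entries
        sleep 10
    done
    exit 0
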
08:32 [diablo] kshlm, the frustrating thing is this: RH sell us this product. They then have a 'helper' script in the hooks which adds an fstab entry, but it doesn't work
08:33 [diablo] kshlm, you're working for Gluster/RH?
08:33 kshlm djgerm, that would work if you are open to writing your own init scripts.
08:33 djgerm i am not :)
08:33 kshlm [diablo], Yes I do.
08:33 [diablo] ah cool
08:33 kshlm [diablo], IIRC, that helper script is used to add a mount entry for samba.
08:33 [diablo] voila
08:34 [diablo] yup, we're adding on CTDB
08:34 djgerm i mean.. ok… I am open to it… but I like to do it "right" and I'm no Release Engineer
08:34 kshlm [diablo], I might have put down the possible solution on some bug report. I'll see if I can find it for you.
08:35 [diablo] thank you kshlm
08:35 kshlm djgerm, In your case, I have no other solution than adding separate init-scripts for mounting gluster volumes.
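
One way such a separate script could look on an upstart system like djgerm's Ubuntu 14.04 (a hypothetical job, not something the packages ship; glusterfs-server is the Ubuntu package's upstart job name):

    # /etc/init/mount-gluster.conf
    description "mount gluster volumes after glusterd and the network are up"
    start on (started glusterfs-server and net-device-up IFACE!=lo)
    task
    exec mount -a -t glusterfs
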
08:35 fsimonce joined #gluster
08:36 djgerm well you know… it kinda makes sense
08:36 djgerm to not mount a distributed volume from a system that may have gone down due to untoward reasons
08:36 kshlm djgerm, I've also not investigated the issue a lot on upstart systems, so I might be wrong as well.
08:36 kshlm s/a lot/at all/
08:36 glusterbot What kshlm meant to say was: djgerm, I've also not investigated the issue at all on upstart systems, so I might be wrong as well.
08:37 [diablo] brb
08:37 djgerm WHOA!
08:37 djgerm do most chat bots do that?!
08:38 kshlm glusterbot does.
08:38 glusterbot kshlm: I do not know about 'does.', but I do know about these similar topics: 'docs'
08:39 djgerm glusterbot fstab
08:39 kshlm glusterbot++
08:39 glusterbot kshlm: glusterbot's karma is now 9
08:39 djgerm awww, glusterbot doesn't listen to a poser like me
08:39 Akee joined #gluster
08:40 kshlm djgerm, You might have a little more luck with Ubuntu issues around the US time zone. People who've used it in that environment should be around then.
08:40 ahino joined #gluster
08:46 itisravi_ joined #gluster
08:46 jtux1 joined #gluster
08:46 ChrisHolcombe joined #gluster
08:46 karthik__ joined #gluster
08:47 shyam2 joined #gluster
08:47 kovshenin joined #gluster
08:48 Apeksha_ joined #gluster
08:48 _ndevos_ joined #gluster
08:49 abyss^_ joined #gluster
08:49 kshlm [diablo], https://bugzilla.redhat.com/show_bug.cgi?id=1260007#c3
08:49 glusterbot Bug 1260007: high, unspecified, ---, rhs-bugs, NEW , glusterd tries to start before network is online  and fails to start on RHGS3.1.1 nodes based on RHEL7 after a reboot
08:50 kshlm This comment tries to explain why the services start in a particular order. And what could be done to solve it with systemd.
08:52 * [diablo] pats glusterbot on the head
08:52 kshlm rhel7.2 has the required version of systemd with support for `x-systemd.requires` mount option.
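
A minimal fstab sketch of that option, reusing djgerm's entry from earlier in the log (only valid where systemd honors x-systemd.requires, per kshlm's comment):

    git01.lab.com:/git  /mnt/gfs  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service,log-file=/var/log/gfsmount.log  0 0
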
08:53 cliluw joined #gluster
08:57 djgerm glusterbot quorum
08:57 djgerm dang it!
08:57 kshlm glusterbot help
08:57 glusterbot kshlm: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
08:57 rastar joined #gluster
08:57 kshlm glusterbot docs
08:57 glusterbot kshlm: The Gluster Documentation is at https://gluster.readthedocs.org/en/latest/
08:58 * kshlm doesn't know how to work glusterbot
08:58 kshlm glusterbot list
08:58 glusterbot kshlm: Admin, Alias, Anonymous, Bugzilla, Channel, ChannelStats, Conditional, Config, Dict, Factoids, Google, Herald, Karma, Later, MessageParser, Misc, Network, NickCapture, Note, Owner, Plugin, PluginDownloader, Reply, Seen, Services, String, Topic, Trigger, URL, User, Utilities, and Web
08:58 kshlm glusterbot help factoids
08:58 glusterbot kshlm: Error: There is no command "factoids". However, "Factoids" is the name of a loaded plugin, and you may be able to find its provided commands using 'list Factoids'.
08:58 kshlm glusterbot help Factoids
08:58 glusterbot kshlm: Error: There is no command "factoids". However, "Factoids" is the name of a loaded plugin, and you may be able to find its provided commands using 'list Factoids'.
08:58 kshlm glusterbot list Factoids
08:58 glusterbot kshlm: alias, change, forget, info, learn, lock, random, rank, search, unlock, and whatis
08:58 djgerm thanks :)
08:59 anoopcs @learn
08:59 glusterbot anoopcs: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
08:59 kshlm anoopcs, do you know how to make glusterbot list facts it knows?
08:59 kshlm @help
08:59 glusterbot kshlm: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
09:00 * kshlm is giving up and heading out for lunch.
09:00 anoopcs @Not really. We can teach using learn
09:00 anoopcs kshlm, ^^
09:01 haomaiwang joined #gluster
09:03 [diablo] kshlm, hmmm so it's not fixed
09:04 kshlm [diablo], Nope. But if you are running 7.2 you can try the suggested changes to glusterd.service and fstab.
09:04 kshlm I have to go out for lunch, I'll be back in about half an hour.
09:05 mhulsman1 joined #gluster
09:05 [diablo] cheers kshlm
09:06 mhulsman joined #gluster
09:08 Slashman joined #gluster
09:09 anoopcs @factoids rank
09:09 glusterbot anoopcs: #1 pasteinfo (305), #2 extended attributes (236), #3 extended attributes (225), #4 glossary (219), #5 hostnames (180), #6 meh (153), #7 mount server (130), #8 php (123), #9 repair (119), #10 stripe (108), #11 node (102), #12 Joe's blog (83), #13 latest (83), #14 nfs (67), #15 php (66), #16 puppet (58), #17 brick order (49), #18 volunteer (48), #19 semiosis tutorial (46), #20 hack (43)
09:10 ndevos @random
09:10 glusterbot ndevos: Error: The command "random" is available in the Dict and Factoids plugins.  Please specify the plugin whose command you wish to call by using its name as a command before "random".
09:11 glafouille joined #gluster
09:12 javi404 joined #gluster
09:13 dusmant joined #gluster
09:18 anoopcs @factoids random
09:20 fgd Hi #gluster! Reporting back with my failed node problem (I have a 2x2 replicated volume, 2 nodes, 2 bricks per node). I've "replaced" a failed node while keeping its IP (https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/). When the gluster daemon is started volume healing is initiated (looks like it works fine) but reading/writing to the volume from clients halts, until I stop the daemon on the new node. Any ideas how
09:20 fgd to solve this?
09:20 glusterbot Title: Recover from a failed server in a GlusterFS array (at support.rackspace.com)
09:28 kotreshhr joined #gluster
09:31 jiffin joined #gluster
09:33 djgerm kshlm: were there any other Ubuntu-specific bugs filed for this auto mount issue?
09:33 djgerm oh! automount!
09:34 djgerm yeah…. autofs might do the trick....
09:35 Bhaskarakiran joined #gluster
09:38 gildub joined #gluster
09:39 aravindavk joined #gluster
09:39 djgerm there you go
09:39 djgerm http://blog.gluster.org/2014/04/configuring-autofs-for-glusterfs-3-5-2/
09:39 glusterbot Title: Configuring autofs for GlusterFS 3.5 | Gluster Community Website (at blog.gluster.org)
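
The linked post boils down to an automount map along these lines (a sketch; the mount point and volume name reuse djgerm's setup):

    # /etc/auto.master
    /mnt  /etc/auto.gluster
    # /etc/auto.gluster
    gfs  -fstype=glusterfs  git01.lab.com:/git
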
09:39 jww joined #gluster
09:39 jww Hello.
09:39 glusterbot jww: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:40 jww pfft
09:40 djgerm ha! you read me mind!
09:46 atalur joined #gluster
09:46 jww so I'm running glusterfs on 2 servers, and I use a java program that finds the number of pages in all PDFs in a directory. I noticed that it was a lot slower on gluster. Some metrics showed that the java program goes 3X faster on local disk. Does somebody have advice or tips?
09:48 djgerm hmm no… have you done any performance tuning so far? like… disk alignment, XFS inode size, uh… stuff like that?
09:49 jww no I did not.
09:51 djgerm ah. hmm I don't have a guide handy… but there's probably all sorts of little tweaks and such… someone more advanced would likely know what the most bang-for-your-buck changes would be.
09:51 djgerm but there's quite a few out there…
09:53 djgerm so there is hope for your performance concerns!
09:53 djgerm how about latency? I heard latency will tax your performance
09:56 jww I think latency is ok, both servers are at OVH's datacenter with a 10GB network
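
The brick-level tuning djgerm alludes to usually starts at mkfs time; a common sketch from gluster tuning guides (device and brick path are placeholders):

    mkfs.xfs -i size=512 /dev/sdb1                    # larger inodes keep gluster xattrs inline
    mount -o noatime,inode64 /dev/sdb1 /data/brick1   # typical brick mount options
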
09:58 baojg_ joined #gluster
10:01 DV__ joined #gluster
10:01 haomaiwa_ joined #gluster
10:02 gildub joined #gluster
10:10 bitpushr joined #gluster
10:14 jiffin1 joined #gluster
10:19 dusmant joined #gluster
10:19 shyam joined #gluster
10:20 shyam1 joined #gluster
10:22 gildub joined #gluster
10:22 shyam left #gluster
10:40 jiffin1 joined #gluster
10:42 Manikandan joined #gluster
10:44 renout_away joined #gluster
10:45 mhulsman left #gluster
10:46 mhulsman joined #gluster
10:49 atalur joined #gluster
10:50 pg joined #gluster
10:52 glafouille joined #gluster
10:56 Manikandan_ joined #gluster
10:58 purpleidea joined #gluster
11:01 karnan joined #gluster
11:01 haomaiwang joined #gluster
11:07 kanagaraj joined #gluster
11:07 ctria joined #gluster
11:14 renout_away joined #gluster
11:15 Wizek joined #gluster
11:16 baojg joined #gluster
11:22 harish_ joined #gluster
11:27 dusmant joined #gluster
11:29 kotreshhr joined #gluster
11:34 jiffin1 joined #gluster
11:40 itisravi joined #gluster
11:40 itisravi joined #gluster
11:41 aravindavk joined #gluster
11:45 kshlm joined #gluster
11:45 B21956 joined #gluster
11:46 shyam joined #gluster
11:48 B21956 joined #gluster
12:01 haomaiwa_ joined #gluster
12:06 mhulsman1 joined #gluster
12:07 mhulsman joined #gluster
12:09 pg joined #gluster
12:11 nottc joined #gluster
12:12 jiffin1 joined #gluster
12:16 drankis joined #gluster
12:18 nbalacha joined #gluster
12:20 kotreshhr joined #gluster
12:21 nbalacha joined #gluster
12:24 ira joined #gluster
12:28 arcolife joined #gluster
12:30 johnmilton joined #gluster
12:33 atinm joined #gluster
12:39 fedele left #gluster
12:46 shubhendu joined #gluster
12:49 Saravanakmr joined #gluster
12:52 unclemarc joined #gluster
12:55 Apeksha joined #gluster
13:01 haomaiwa_ joined #gluster
13:08 atinm joined #gluster
13:10 kdhananjay joined #gluster
13:11 nekrodesk joined #gluster
13:16 nekrodesk joined #gluster
13:17 pg joined #gluster
13:18 nekrodesk joined #gluster
13:19 nishanth joined #gluster
13:20 nekrodesk joined #gluster
13:20 karnan joined #gluster
13:22 nekrodesk joined #gluster
13:26 nekrodesk joined #gluster
13:27 bennyturns joined #gluster
13:30 nekrodesk joined #gluster
13:33 nekrodesk joined #gluster
13:37 chirino joined #gluster
13:41 [diablo] hey guys, anyone running FreeBSD with GlusterFS?
13:42 csaba joined #gluster
13:47 rwheeler joined #gluster
13:48 shyam joined #gluster
13:48 csim [diablo]: there is a jenkins builder to verify it build, but nothing more
13:52 [diablo] hi csim ah OK
13:54 [diablo] sadly theres no freebsd port
13:55 chirino joined #gluster
13:55 bluenemo joined #gluster
13:58 gem joined #gluster
14:00 chirino joined #gluster
14:01 haomaiwa_ joined #gluster
14:17 chirino_m joined #gluster
14:23 nishanth joined #gluster
14:25 harish joined #gluster
14:26 harish joined #gluster
14:29 robb_nl joined #gluster
14:34 kshlm joined #gluster
14:36 chirino joined #gluster
14:39 skylar joined #gluster
14:43 farhoriz_ joined #gluster
14:45 vmallika joined #gluster
14:52 julim joined #gluster
14:53 theron joined #gluster
14:56 johnmilton joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 robb_nl joined #gluster
15:08 jwang_ joined #gluster
15:11 plarsen joined #gluster
15:12 [Enrico] joined #gluster
15:13 farhoriz_ joined #gluster
15:15 Guest84596 joined #gluster
15:16 fgd (repost) Hi #gluster! Reporting back with my failed node problem (I have a 2x2 replicated volume, 2 nodes, 2 bricks per node). I've "replaced" a failed node while keeping its IP (https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/). When the gluster daemon is started volume healing is initiated (looks like it works fine) but reading/writing to the volume from clients halts, until I stop the daemon on the new node. Any
15:16 fgd ideas how to solve this?
15:16 glusterbot Title: Recover from a failed server in a GlusterFS array (at support.rackspace.com)
15:18 nbalacha joined #gluster
15:22 kotreshhr left #gluster
15:23 armyriad joined #gluster
15:30 Melamo joined #gluster
15:31 wushudoin joined #gluster
15:39 ahino1 joined #gluster
15:40 Manikandan joined #gluster
15:44 farhoriz_ joined #gluster
16:00 hamiller joined #gluster
16:01 wistof joined #gluster
16:01 haomaiwang joined #gluster
16:02 p8952 joined #gluster
16:02 ws2k3 joined #gluster
16:02 v12aml joined #gluster
16:03 xMopxShell joined #gluster
16:05 dmnchild joined #gluster
16:05 atinm joined #gluster
16:05 yosafbridge joined #gluster
16:07 jww joined #gluster
16:08 p8952 joined #gluster
16:09 ivan_rossi left #gluster
16:14 calavera joined #gluster
16:15 ccha2 joined #gluster
16:17 chirino joined #gluster
16:20 cpetersen joined #gluster
16:21 cpetersen JoeJulian: why do you (and perhaps the industry) prefer other products over VMware regardless of price?
16:24 calavera_ joined #gluster
16:26 calavera joined #gluster
16:30 calavera_ joined #gluster
16:31 wushudoin joined #gluster
16:32 skoduri joined #gluster
16:38 drankis joined #gluster
16:42 scubacuda joined #gluster
16:44 fyxim joined #gluster
16:45 ahino joined #gluster
16:48 dlambrig_ joined #gluster
16:50 Ethical2ak joined #gluster
16:54 frankS2 joined #gluster
16:56 nekrodesk joined #gluster
16:58 coredump joined #gluster
17:00 nekrodesk joined #gluster
17:01 haomaiwa_ joined #gluster
17:05 axcss joined #gluster
17:07 axcss how to fix malformed internal link in gluster 3.6 ubuntu
17:10 JoeJulian cpetersen: openness and trust. Why do I hang out here and share everything I know with the world? It's the same mindset.
17:11 cpetersen So fundamentally, VMware being closed to the world is the problem.  What about featureset technically?
17:11 rcampbel3 joined #gluster
17:12 JoeJulian [diablo]: ask on the gluster-users mailing list. Emmanuel should be able to help you with FreeBSD.
17:13 luizcpg joined #gluster
17:15 nekrodesk joined #gluster
17:15 JoeJulian A long time ago when I tested it, it just didn't perform as well as either kvm or xen.
17:16 cpetersen Sorry no flag flying intended.  VMware is just what I have most of my experience in.  What about features like HA, resource pooling, DRS and vMotion?
17:16 deniszh joined #gluster
17:17 JoeJulian "vmotion" is "live migration"
17:17 cpetersen Yeah I can understand performance not being comparable due to the lack of access to a lot of back-end code.
17:19 nekrodesk joined #gluster
17:19 cpetersen Damn, I sound like such a skeptic today. I truly want to get into being able to deploy and support KVM/oVirt/Openstack (eventually), I'm just so new to it. The payoff in the end is massive; I am just trying to evaluate if the R&D time is worth it for me.
17:19 JoeJulian I think "DRS" in vmware is "Heat" in openstack.
17:21 JoeJulian Also comparing with openstack since I don't have much ovirt experience, resource pooling would be handled with projects in openstack.
17:22 JoeJulian What does vmware do that's special regarding "HA"?
17:23 cpetersen I'm not really sure if it's special.  I'm trying to figure out if it is.  :)
17:24 cpetersen It requires two heartbeat shares and shared storage.  It detects failures in application, OS or hardware and moves your VM from one machine to another.
17:24 cpetersen Nothing too unique right?
17:24 JoeJulian Not really. I'm mostly sure Heat can do that, too.
17:26 cpetersen I haven't traditionally been a Linux SA in the past.  Lots of VMware (Linux, kinda..) and Windows.  Openstack intimidates me quite a bit.  Enough where I feel like I need to spend countless nights at home doing R&D before I can even think of proposing it to executives.  lol
17:28 cpetersen Gluster/Ganesha with Centos has been really fun.
17:28 cpetersen Stressful, but fun.
17:28 cpetersen Split-brain almost gives me a heart attack.
17:29 JoeJulian Yeah, split-brain is best to avoid. :)
17:30 squizzi joined #gluster
17:30 cpetersen It's also a hard sell.  Customers are willing to spend for VMware because of the market presence they have.  I do small systems of not more than a few servers at a time per site, so it's a super hard sell when introducing any kind of dynamic redundancy.
17:30 JoeJulian And I agree about OpenStack's intimidation factor. It's huge and full of potential aggravation.
17:31 calavera joined #gluster
17:31 JoeJulian And the sell is getting easier. More and more companies are choosing to migrate away, according to industry reports.
17:32 axcss sorry newbie here. is there a better place to get a little help with gluster?
17:32 cpetersen How is the automation factor vs something like VMware? SNMP is also very accessible with VMware and its integration with vendor hardware.
17:33 cpetersen Do I have to check-in every week?  Two weeks?  Month?  Will I find something that went randomly wrong?
17:34 calavera_ joined #gluster
17:36 nekrodesk joined #gluster
17:36 JoeJulian axcss: imho, no. This is the best possible place.
17:36 JoeJulian But there is also a ,,(mailing-list)
17:36 glusterbot I do not know about 'mailing-list', but I do know about these similar topics: 'mailing list', 'mailing lists', 'mailinglist', 'mailinglists'
17:36 JoeJulian @mailing list
17:36 glusterbot JoeJulian: the gluster general discussion mailing list is gluster-users, here: http://www.gluster.org/mailman/listinfo/gluster-users
17:38 JoeJulian cpetersen: Most people use nagios for monitoring.
17:38 ovaistariq joined #gluster
17:39 cpetersen Nagios vs PRTG?
17:39 calavera joined #gluster
17:39 axcss JoeJulian: thanks. Can anybody help or point in right direction? Our webservers have slowed to a crawl.
17:39 purpleidea joined #gluster
17:39 purpleidea joined #gluster
17:41 ovaistar_ joined #gluster
17:45 calavera_ joined #gluster
17:48 axcss Anyone else dealing with malformed internal link? I read it was fixed in this version
17:51 calavera joined #gluster
17:54 calavera_ joined #gluster
17:57 cpetersen_ joined #gluster
18:00 calavera joined #gluster
18:01 haomaiwa_ joined #gluster
18:05 bennyturns joined #gluster
18:14 CyrilPeponnet @axcss what do you mean by malformed internal link?
18:15 JoeJulian axcss: What changed? Are there any self-heals happening? What's a malformed internal link?
18:17 unlaudable joined #gluster
18:42 ashiq joined #gluster
18:44 calavera_ joined #gluster
18:51 cpetersen joined #gluster
18:53 xMopxShell joined #gluster
18:55 calavera joined #gluster
18:57 B21956 joined #gluster
19:01 haomaiwa_ joined #gluster
19:07 purpleidea joined #gluster
19:07 purpleidea joined #gluster
19:20 skylar joined #gluster
19:22 skylar joined #gluster
19:23 plarsen joined #gluster
19:34 ovaistariq joined #gluster
19:36 calavera joined #gluster
19:37 LDA joined #gluster
19:41 ovaistar_ joined #gluster
19:45 calavera joined #gluster
19:46 nekrodesk joined #gluster
19:50 calavera_ joined #gluster
19:51 ahino joined #gluster
19:53 nekrodesk joined #gluster
20:00 drankis joined #gluster
20:01 haomaiwang joined #gluster
20:13 petan joined #gluster
20:14 ovaistariq joined #gluster
20:20 jbrooks joined #gluster
20:23 calavera joined #gluster
20:37 calavera joined #gluster
20:38 jbrooks joined #gluster
20:46 calavera_ joined #gluster
20:51 cpetersen joined #gluster
20:53 _Bryan_ joined #gluster
20:55 calavera joined #gluster
20:56 _Bryan_ Can anyone shed light on what I need to do when a brick fails....I took the node down...fixed the storage and then brought the node back online but it is saying that it is not a trusted store
21:01 calavera_ joined #gluster
21:01 haomaiwa_ joined #gluster
21:04 calavera joined #gluster
21:05 mhulsman joined #gluster
21:06 calaver__ joined #gluster
21:23 Melamo joined #gluster
21:43 skylar joined #gluster
21:53 calavera joined #gluster
22:00 nekrodesk joined #gluster
22:01 haomaiwa_ joined #gluster
22:01 JoeJulian So you lost the /var/lib/glusterd tree on that server?
22:02 JoeJulian _Bryan_: See if this helps: https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/
22:02 glusterbot Title: Recover from a failed server in a GlusterFS array (at support.rackspace.com)
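
The linked article's procedure, compressed into a sketch (it assumes the replacement keeps the old hostname/IP; UUID, peer, and volume names are placeholders):

    # on a surviving peer: note the failed server's UUID
    gluster peer status
    # on the rebuilt server: reuse that UUID before glusterd rejoins the pool
    echo UUID=<old-uuid> > /var/lib/glusterd/glusterd.info
    service glusterd restart
    gluster peer probe <surviving-peer>     # re-join the trusted pool
    gluster volume heal <volname> full      # let self-heal repopulate the brick
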
22:02 calavera_ joined #gluster
22:07 nekrodesk joined #gluster
22:07 calavera joined #gluster
22:09 johnmilton joined #gluster
22:10 calavera_ joined #gluster
22:10 johnmilton joined #gluster
22:18 Wizek joined #gluster
22:33 farhoriz_ joined #gluster
22:35 HamburgerMartyr joined #gluster
22:43 Slashman joined #gluster
22:45 calavera joined #gluster
22:46 plarsen joined #gluster
22:50 calavera_ joined #gluster
22:52 cpetersen_ joined #gluster
22:53 calavera joined #gluster
22:57 _Bryan_ joejulian: just lost a drive....replaced it and fixed the array, then remounted, brought the server back up, tried to start the brick, and ran into problem after problem
22:57 _Bryan_ eventually had to replace-brick giving it a new directory..then replace-brick to move it back to the original directory and finally got the brick to start up and start healing..
22:57 _Bryan_ I miss the days of 3.2 just starting to heal when I ran find...after fixing a failed node
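
The dance _Bryan_ describes, spelled out as a sketch (volume, server, and paths are placeholders):

    # move the brick onto a scratch directory, then back to the original path
    gluster volume replace-brick myvol server1:/data/brick1 server1:/data/brick1-tmp commit force
    gluster volume replace-brick myvol server1:/data/brick1-tmp server1:/data/brick1 commit force
    gluster volume heal myvol full
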
22:58 calavera_ joined #gluster
23:01 haomaiwa_ joined #gluster
23:03 gildub joined #gluster
23:04 calavera joined #gluster
23:07 calavera_ joined #gluster
23:10 calavera joined #gluster
23:19 JoeJulian Yeah, you didn't have to do it that way.
23:19 JoeJulian Did you miss the days of 3.2 when your mount failed and replica filled up your root disk? ;)
23:20 djgerm left #gluster
23:24 cpetersen__ joined #gluster
23:32 theron joined #gluster
23:39 calavera_ joined #gluster
23:45 calavera joined #gluster
23:53 JoeJulian @split-brain
23:53 glusterbot JoeJulian: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
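
The 3.7-era CLI path those docs describe, sketched (volume and file names are placeholders):

    gluster volume heal myvol info split-brain                 # list files in split-brain
    gluster volume heal myvol split-brain bigger-file /path/in/volume
    gluster volume heal myvol split-brain source-brick server1:/data/brick1 /path/in/volume
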
23:56 calavera_ joined #gluster
