
IRC log for #gluster, 2013-12-04


All times shown according to UTC.

Time Nick Message
00:02 dylan_ joined #gluster
00:04 bgpepi joined #gluster
00:12 badone joined #gluster
00:12 mattapp__ joined #gluster
00:13 mattappe_ joined #gluster
00:33 T0aD- joined #gluster
00:37 dbruhn joined #gluster
00:38 GLHMarmot joined #gluster
00:46 hchiramm_ joined #gluster
00:49 psyl0n joined #gluster
01:03 _BryanHm_ joined #gluster
01:18 elyograg attempting an in-place upgrade from 3.3.1 to 3.4.1 on my server has resulted in a LOT of files being unreachable and returning "Input/output error" with find.
01:18 elyograg testbed servers.
01:20 a2 what do the client logs say?
01:24 elyograg from a fuse client that can't see anything at all: [2013-12-03 18:22:20.260610] E [dht-common.c:1372:dht_lookup] 5-testvol-dht: Failed to get hashed subvol for /one/RTR1
01:24 vpshastry joined #gluster
01:25 vpshastry left #gluster
01:26 a2 elyograg, looks like some bricks are not connected to
01:28 elyograg via nfs, i can see partial directory listings and I'm seeing this at the moment I do the ls:  http://fpaste.org/58827/20475138/
01:29 a2 ah you were getting IO errors through NFS, not FUSE?
01:29 elyograg I upgraded two of the brick servers and rebooted them.  one at a time.
01:30 a2 looks like either the bricks are not up (check with gluster volume status) or the client (and nfs server) is not able to connect to them
01:30 elyograg via fuse, I see: find: `./one/RTR1': Invalid argument
01:30 elyograg via NFS, find: `./one/RTR1/rtrphotosfour/docs/748': Input/output error
01:32 elyograg http://fpaste.org/58828/12071813/
01:32 a2 check gluster volume status to see if bricks are really started
01:32 elyograg I upgraded testb3 first, then happened to notice that I was having problems just before I rebooted testb4.
01:33 elyograg after I'd already done the yum.
01:33 elyograg CentOS 6.
01:33 elyograg gluster volume status is there in that last paste.
01:34 glusterbot joined #gluster
01:36 davidbierce joined #gluster
01:41 chirino joined #gluster
01:45 glusterbot joined #gluster
01:48 bennyturns joined #gluster
01:49 elyograg I need to get headed home, so going AFK, but will see everything later. More info: http://fpaste.org/58829/12171113/
01:49 glusterbot Title: #58829 Fedora Project Pastebin (at fpaste.org)
02:06 harish joined #gluster
02:15 glusterbot New news from resolvedglusterbugs: [Bug 764339] Fileop fails when quota is enabled <http://goo.gl/mP8lcc>
02:15 glusterbot New news from newglusterbugs: [Bug 1037267] network disconnect/reconnect does not resume data access to server <http://goo.gl/zpKl58> || [Bug 1005616] glusterfs client crash (signal received: 6) <http://goo.gl/QxeOVY> || [Bug 985946] volume rebalance status outputting nonsense <http://goo.gl/KXVkFT> || [Bug 1030580] Feature request (CLI): Add an option to the CLI to fetch just incremental or cumulative I/O statistics <ht
02:16 mattapp__ joined #gluster
02:17 jag3773 joined #gluster
02:22 davidbierce joined #gluster
02:22 _pol joined #gluster
02:30 jbd1 joined #gluster
02:31 cogsu joined #gluster
02:31 gdubreui joined #gluster
03:05 kshlm joined #gluster
03:12 sgowda joined #gluster
03:15 _polto_ joined #gluster
03:16 micu1 joined #gluster
03:18 johnmwilliams joined #gluster
03:18 sysconfig joined #gluster
03:19 compbio joined #gluster
03:19 AndreyGrebenniko joined #gluster
03:19 VeggieMeat joined #gluster
03:20 gdubreui joined #gluster
03:38 shubhendu joined #gluster
03:39 JoeJulian Interesting... #supybot-bots would make an excellent place to test your exploit of supybot I suppose... I forgot glusterbot was still in that channel.
03:40 JoeJulian 46gb of exception backtraces logged in the last 24 hours. <sigh>
03:42 semiosis @qa releases
03:42 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
03:42 semiosis hey!
03:42 semiosis glusterbot: whoami
03:42 glusterbot semiosis: I don't recognize you.
03:42 semiosis @seen JoeJulian
03:42 glusterbot semiosis: JoeJulian was last seen in #gluster 2 minutes and 29 seconds ago: <JoeJulian> 46gb of exception backtraces logged in the last 24 hours. <sigh>
03:42 semiosis JoeJulian: thanks!!!
03:45 semiosis JoeJulian: someone tried to exploit glusterbot?
03:45 semiosis glusterbot: version
03:45 glusterbot semiosis: The current (running) version of this Supybot is 0.83.4.1+limnoria 2013-12-03T05-44-50, running on Python 2.6.6 (r266:84292, Jul 10 2013, 22:48:45)  [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]. The newest versions available online are 2013-12-03T05-44-50 (in testing), 2013-12-03T05-44-50 (in master).
03:50 semiosis hmm, apparently i dropped in 2 mins after JoeJulian but looks like even that was too late to catch him
03:54 saurabh joined #gluster
03:54 itisravi joined #gluster
03:54 semiosis :O
03:56 mistich joined #gluster
03:57 mistich have a quick question: I have an 8 node cluster; when I mount the clients, should I mount them all to the 1st node or spread them out? also, do I gain anything if I spread them out
03:58 semiosis mistich: ,,(mount server)
03:58 glusterbot mistich: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
03:59 semiosis mistich: using FUSE or NFS clients?
03:59 mistich fuse
04:00 semiosis then you're fine.  the only thing you might want to consider is how the clients will mount if that mount server happens to be down
04:00 semiosis in that case, see ,,(rrdns)
04:00 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/Kc9MVc
04:00 mistich ok thanks
04:00 semiosis yw
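As glusterbot notes above, the server named at mount time only hands out the volume definition, so a round-robin DNS name pointing at every storage node removes the single point of failure for mounting. A minimal sketch of the setup being discussed; the zone name gluster.example.com, the addresses, and the volume name myvol are all made up:

    ; BIND zone fragment: one hostname resolving to every gluster node (round-robin)
    gluster  IN  A  192.0.2.11
    gluster  IN  A  192.0.2.12
    gluster  IN  A  192.0.2.13

    # /etc/fstab on a FUSE client: whichever address answers can serve the volfile
    gluster.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0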
04:01 shireesh joined #gluster
04:01 mistich 404: Page not found – the page  http://goo.gl/Kc9MVc  does not exist.
04:02 mistich I understand but wanted to let you know the link does not work
04:04 semiosis ooh
04:04 ndarshan joined #gluster
04:04 semiosis i found this googling: http://edwyseguru.wordpress.com/2012/01/09/using-rrdns-to-allow-mount-failover-with-glusterfs/
04:04 glusterbot <http://goo.gl/Kc9MVc> (at edwyseguru.wordpress.com)
04:04 semiosis @forget rrdns
04:04 glusterbot semiosis: The operation succeeded.
04:05 semiosis @learn rrdns You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://edwyseguru.wordpress.com/2012/01/09/using-rrdns-to-allow-mount-failover-with-glusterfs/
04:05 glusterbot semiosis: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
04:05 semiosis @learn rrdns as You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://edwyseguru.wordpress.com/2012/01/09/using-rrdns-to-allow-mount-failover-with-glusterfs/
04:05 glusterbot semiosis: The operation succeeded.
04:05 semiosis @rrdns
04:05 glusterbot semiosis: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/Kc9MVc
04:05 raghu joined #gluster
04:05 semiosis ah, a problem with the URL shortener!
04:05 mistich :)
04:06 mistich fyi almost have gluster working with rrds
04:06 semiosis @forget rrdns
04:06 glusterbot semiosis: The operation succeeded.
04:06 semiosis @learn rrdns as You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
04:06 glusterbot semiosis: The operation succeeded.
04:06 semiosis @rrdns
04:06 glusterbot semiosis: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
04:06 semiosis there we go :)
04:07 semiosis mistich: glad to hear it.  we were talking about rrdns in here earlier today... none of us (present at the time) had tried it
04:07 semiosis i assume it must work because glusterbot tells us about it, and glusterbot knows all
04:07 semiosis mistich: please share your experience using rrdns!
04:07 semiosis would love to hear about it
04:08 shyam joined #gluster
04:08 mistich I will do a write up when 100% working
04:08 semiosis hey, that's great!  do you have a place to publish it?
04:08 semiosis if not, feel free to add a page in the HOWTO section of the gluster.org wiki
04:08 mistich also, once the new rrd comes out with rrdcached working with rrdcreate, it will work even better
04:08 mistich will do
04:09 semiosis if you publish elsewhere, at least add a link to our wiki please
04:09 semiosis thanks!
04:09 mistich np in the main code branch o
04:09 mistich np
04:09 semiosis the new rrd?  whats that?  never heard of rrdcached/rrdcreate
04:09 semiosis you talking about a new gluster feature??
04:10 mistich rrdcached is a daemon that allows you to cache the writes and spreads them out over many threads
04:10 semiosis btw, gluster.org HOWTO index: http://www.gluster.org/community/documentation/index.php/HowTo
04:10 glusterbot <http://goo.gl/YvpKi2> (at www.gluster.org)
04:10 semiosis mistich: ooh, rrd as in rrdtool, not rrdns
04:10 semiosis cool!
04:10 mistich :)
04:11 semiosis @learn howtos as gluster.org HOWTO index: http://goo.gl/2RWo8M
04:11 glusterbot semiosis: The operation succeeded.
04:11 mistich can update about 75000 rrds from one gluster thread
04:11 semiosis sweet
04:12 semiosis but i gotta ask... why rrd?  why not graphite/whisper?
04:12 mistich yeah would like to do more but bottleneck somewhere in fuse client
04:12 mistich rrd is the datastore zenoss uses for now
04:12 semiosis ah
04:12 mistich they are moving to a db backend in the next version
04:14 mistich is there a good doc on telling when your fuse client is overwhelmed, and one on when to add more nodes
04:14 semiosis well i'm sure they know what they're doing :)
04:14 semiosis hmmm good question
04:14 semiosis none that i know of
04:14 mistich ok just checking
04:14 semiosis most often the bottleneck isn't the fuse client itself, but the network or the brick disks
04:15 mistich yeah I like rrds but will glad they move to database
04:15 mistich yeah I have 10 gig network so not hitting that
04:15 semiosis "the network" could mean actual switches &c or the NIC on the server(s)
04:15 mistich I can get 9.5 gig to the gluster servers so know that is not it
04:15 anands joined #gluster
04:15 semiosis less common, when you have high perf net work & disk, the fuse client can actually consume substantial CPU (or so I've heard)
04:16 semiosis s/net work/network/
04:16 glusterbot What semiosis meant to say was: [histsearch network]
04:16 semiosis glusterbot: thx
04:16 glusterbot semiosis: you're welcome
04:16 mistich yeah I have seen high cpu on the fuse client
04:17 mistich brick disk are ssds so not a high io wait on them
04:17 semiosis whats the network latency (RTT) between client & server machine?
04:17 semiosis i've heard 10gbE has same latency as gigE
04:17 basic` joined #gluster
04:17 mistich rtt min/avg/max/mdev = 0.341/0.355/0.369/0.014 ms
04:18 basic` joined #gluster
04:18 semiosis so, thats not great.  decent, but not great
04:18 semiosis and for lots of tiny writes like I imaging you have with RRDs, may be a bottleneck
04:18 semiosis imagine*
04:19 * semiosis had a couple beers, typing impaired more than thinking
04:20 mistich yeah but with rrdcached it dumps the writes all at once
04:20 semiosis can you try tuning that?
04:20 semiosis fewer, larger writes
04:20 hchiramm_ joined #gluster
04:21 mistich yes
04:22 mistich still testing and tuning, that's why I said I almost have it working; still want to tune the 10gig nics more etc....
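One way to get the "fewer, larger writes" semiosis suggests is to let rrdcached batch updates before they reach the gluster mount. A sketch of the kind of invocation that does this; the intervals, socket path, journal directory, and base directory are only illustrative:

    # hold up to 30 minutes of updates per RRD before writing them out in one go,
    # jitter the writes (-z) and journal them (-j) so a crash doesn't lose updates
    rrdcached -l unix:/var/run/rrdcached.sock \
              -w 1800 -z 900 -f 3600 \
              -j /var/lib/rrdcached/journal \
              -b /mnt/gluster/rrds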
04:22 shireesh joined #gluster
04:27 GLHMarmot left #gluster
04:33 mkzero joined #gluster
04:35 mistich left #gluster
04:36 vshankar joined #gluster
04:36 sgowda joined #gluster
04:53 MiteshShah joined #gluster
04:54 satheesh1 joined #gluster
04:58 shylesh joined #gluster
05:16 glusterbot` joined #gluster
05:17 VeggieMeat_ joined #gluster
05:18 rwheeler_ joined #gluster
05:18 wgao_ joined #gluster
05:18 bulde joined #gluster
05:22 Norky_ joined #gluster
05:22 eryc joined #gluster
05:22 mjrosenb joined #gluster
05:22 ultrabizweb joined #gluster
05:22 eryc joined #gluster
05:22 jiffe99 joined #gluster
05:22 GabrieleV joined #gluster
05:24 bala joined #gluster
05:26 Rydekull joined #gluster
05:33 mohankumar joined #gluster
05:38 bala joined #gluster
05:38 vpshastry1 joined #gluster
05:39 MrNaviPacho joined #gluster
05:39 skered- ls
05:39 skered- opps
05:40 psharma joined #gluster
05:42 shylesh joined #gluster
05:45 bulde joined #gluster
05:46 ricky-ti1 joined #gluster
05:48 hagarth joined #gluster
05:49 nshaikh joined #gluster
05:50 dylan_ joined #gluster
05:54 davinder joined #gluster
05:57 tziOm joined #gluster
06:00 anands joined #gluster
06:11 krypto joined #gluster
06:20 tziOm joined #gluster
06:24 rastar joined #gluster
06:25 tziOm joined #gluster
06:40 MiteshShah joined #gluster
06:41 MrNaviPacho joined #gluster
06:52 kshlm joined #gluster
06:54 hagarth joined #gluster
07:02 XpineX joined #gluster
07:12 ctria joined #gluster
07:19 anands joined #gluster
07:20 jtux joined #gluster
07:26 dusmant joined #gluster
07:40 shylesh joined #gluster
07:53 lalatenduM joined #gluster
07:57 ngoswami joined #gluster
07:58 muhh joined #gluster
08:09 ndarshan joined #gluster
08:11 shylesh joined #gluster
08:11 geewiz joined #gluster
08:11 getup- joined #gluster
08:15 ababu joined #gluster
08:34 keytab joined #gluster
08:36 hchiramm_ joined #gluster
08:40 davinder joined #gluster
08:41 satheesh1 joined #gluster
08:43 mohankumar joined #gluster
08:43 dbruhn joined #gluster
08:45 lanning joined #gluster
08:49 onny joined #gluster
08:49 Dga joined #gluster
08:49 nshaikh joined #gluster
08:52 verywiseman i have 4 servers, each with an 8 GB brick, and i created a striped volume, so i have a 32 GB volume. i mounted that volume on a 5th server, and when i copy a large file (9GB) this error message appears: "cp: writing `./bigfile': Input/output error" - why?
08:59 mohankumar joined #gluster
08:59 hagarth joined #gluster
09:01 shyam joined #gluster
09:03 gluslog_ joined #gluster
09:04 schrodinger_ joined #gluster
09:04 Comnenus_ joined #gluster
09:05 _polto_ joined #gluster
09:05 _polto_ joined #gluster
09:06 ninkotech_ joined #gluster
09:06 jiphex_ joined #gluster
09:08 StarBeast joined #gluster
09:18 nullck joined #gluster
09:18 tjikkun joined #gluster
09:18 fidevo joined #gluster
09:18 baoboa joined #gluster
09:19 calum_ joined #gluster
09:20 osiekhan joined #gluster
09:24 satheesh2 joined #gluster
09:25 harish joined #gluster
09:27 shylesh joined #gluster
09:29 nshaikh joined #gluster
09:44 StarBeast joined #gluster
09:51 satheesh joined #gluster
10:06 Staples84 joined #gluster
10:07 hagarth joined #gluster
10:20 dannyroberts joined #gluster
10:21 dannyroberts I have a gluster volume being mounted by gluster-fuse via /etc/fstab across 6 servers as /var/lib/nova/instances, and each time it is mounted the directory ownership is set to root:root, but I need it as nova:nova or the OpenStack Nova service trying to use it will fail. Is there any way I can set the user ownership on mount?
10:23 rjoseph joined #gluster
10:25 rjoseph @paste
10:25 glusterbot rjoseph: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
10:30 samppah dannyroberts: does it help if you set following options storage.owner-uid <novauid> and storage.owner-gid <novagid>
10:42 dannyroberts samppah: possibly, i'll give it a go, thanks
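For reference, the options samppah points to are set on the volume itself, so the ownership survives remounts; the volume name and the nova uid/gid below are placeholders for whatever `id nova` reports on the client:

    # make the volume root owned by the nova user/group instead of root:root
    gluster volume set nova-instances storage.owner-uid 162
    gluster volume set nova-instances storage.owner-gid 162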
10:47 hchiramm_ joined #gluster
10:52 FooBar verywiseman: size of a single brick is the maximum filesize
10:53 FooBar files are not split over multiple bricks
10:57 dannyroberts joined #gluster
11:00 shylesh joined #gluster
11:06 hchiramm_ joined #gluster
11:14 psyl0n joined #gluster
11:14 vpshastry1 joined #gluster
11:22 mohankumar joined #gluster
11:24 diegows joined #gluster
11:25 hchiramm_ joined #gluster
11:28 rastar joined #gluster
11:51 getup- joined #gluster
12:02 anands joined #gluster
12:11 itisravi joined #gluster
12:15 rastar joined #gluster
12:22 hchiramm_ joined #gluster
12:28 nshaikh joined #gluster
12:41 edward1 joined #gluster
12:43 psyl0n joined #gluster
12:47 glusterbot New news from newglusterbugs: [Bug 1038103] Split-brain *.dat <http://goo.gl/RQ2qhx>
12:54 mkzero has anybody else experienced something like 'PHP Warning:  error_log(/client/path/to/file): failed to open stream: Read-only file system in /client/path/to/php/file' on gluster? after that message I also get a 'Error writing file /client/path/to/file. Please check disc space.' which is not the disk-space (testing cluster, nearly empty at the moment)
12:55 JonnyNomad joined #gluster
12:56 mkzero it gets even stranger: after I remounted the client, the error is gone.. this happened to me twice already, once in prod and once in testing
13:00 JonnyNomad joined #gluster
13:01 hchiramm_ joined #gluster
13:05 ricky-ti1 joined #gluster
13:07 vpshastry joined #gluster
13:12 StarBeast joined #gluster
13:14 hybrid512 joined #gluster
13:15 davidbierce joined #gluster
13:24 _BryanHm_ joined #gluster
13:25 hchiramm_ joined #gluster
13:25 psyl0n joined #gluster
13:40 calum_ joined #gluster
13:44 sheldonh joined #gluster
13:45 sheldonh can i make a replicated volume with 3 bricks, where the replica count is 2? i.e. i can lose one brick, but not two. i can't figure out how, thrashing around on the command-line and getting error messages like "Incorrect number of bricks (3) supplied for replica count (2)"
13:46 hchiramm_ joined #gluster
13:48 kkeithley ,,(repo)
13:48 glusterbot I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo'
13:51 X3NQ joined #gluster
13:52 rwheeler joined #gluster
14:02 bennyturns joined #gluster
14:03 eseyman joined #gluster
14:04 hagarth joined #gluster
14:06 mohankumar joined #gluster
14:14 dusmant joined #gluster
14:17 mkzero sheldonh: imho that's not possible with gluster. that would be a raid5-like replication but gluster currently only works like raid1 or raid0 or raid10. you would need a 4th brick. joejulian has a possible work-around on his blog: http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
14:17 glusterbot <http://goo.gl/O3Fj5M> (at joejulian.name)
14:18 glusterbot New news from newglusterbugs: [Bug 1024695] [RFE] Provide RPC throttling stats <http://goo.gl/1vgHoH>
14:28 sheldonh mkzero: that's very helpful. i'll start from scratch with two bricks per server. i didn't understand how replicas are built. thanks!
14:30 qstep joined #gluster
14:31 qstep gluster doesn't start at system boot because the network isn't configured yet. i am using systemctl. can anyone help?
14:31 bsaggy joined #gluster
14:35 kanagaraj joined #gluster
14:35 ndk joined #gluster
14:40 ira joined #gluster
14:40 dbruhn joined #gluster
14:41 davinder joined #gluster
14:42 satheesh1 joined #gluster
14:43 kshlm joined #gluster
14:45 samppah qstep: ubuntu?
14:45 qstep arch linux
14:46 samppah oh
14:47 qstep it's totally weird: systemctl start glusterd works perfectly, finds the remote computer and runs the volume.
14:48 qstep but at boot time (systemctl enable glusterd) it reports errors and crashes. the same errors are thrown if no internet connection is present at that time (i.e. if I try to start it but have the LAN cable disconnected).
14:49 qstep glusterd starts at boot time if a) no peers are connected or b) one peer is connected. if I additionally add a volume replicated over these 2 peers, glusterd exits with an error at boot time.
14:50 qstep (arch linux on both machines, both updated of course)
14:52 qstep i found an about 2 year old bug report on the web, describing something similar. but no other infos. I would like to replicate my homedir between those 2 machines, so I can work using the same home on both (useful for programming 'locally' but testing the program on the remote computer, due to local hardware restrictions)
14:54 kkeithley systemd? arch linux is a Fedora clone? Or close to it?  what version of glusterfs?
14:57 _BryanHm_ joined #gluster
15:00 qstep arch and fedora should be similar. arch is rolling release and has cutting-edge, brand new software versions (usually you get the latest kernel updates from arch repos after one day). glusterd -V shows version 3.4.1 from Nov. 4th
15:00 hagarth almost time for the gluster weekly meeting, do head to #gluster-meeting if you are interested.
15:00 FooBar sheldonh: it can be done, if you have 3 bricks per server on 3 servers
15:02 FooBar sheldonh: h1:brick1 -> h2:brick1, h3:brick1 ->, h1:brick2, h2:brick2 -> h3:brick2
15:02 FooBar (2 bricks per server, 3 servers)
15:02 kshlm qstep: When you mean gluster doesn't start, do you mean the gluster mount doesn't happen or that glusterd and the bricks don't start?
15:02 sheldonh FooBar: what i wanted was to increase capacity in exchange for increased risk. so i dropped down to replica 1 (distributed), then added one brick to make a single 2-brick replica, then added more 2-brick replicas, one at a time. so now i have "Type: Distributed-Replicate" and "Number of Bricks: 3 x 2 = 6". happy :)
15:02 FooBar yup
15:03 sheldonh FooBar: sadly, i only figured that out *after* i told the user to terminate all his instances :)
15:04 sheldonh man, this stuff is HOT!
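A sketch of the "3 x 2 = 6" layout sheldonh ends up with, spelled out as one create command; the hostnames, brick paths, and volume name are invented, and the brick order matters because each consecutive pair of bricks becomes one replica set (so no pair should sit on a single server):

    gluster volume create myvol replica 2 \
        h1:/bricks/b1 h2:/bricks/b1 \
        h2:/bricks/b2 h3:/bricks/b1 \
        h3:/bricks/b2 h1:/bricks/b2
    gluster volume start myvol
    gluster volume info myvol   # should report Type: Distributed-Replicate, 3 x 2 = 6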
15:04 japuzzo joined #gluster
15:05 shyam joined #gluster
15:06 clag___ left #gluster
15:12 keytab joined #gluster
15:14 qstep this is exactly what happens to me: http://www.gluster.org/pipermail/gluster-users/2013-September/037244.html
15:14 qstep any suggestions?
15:14 glusterbot <http://goo.gl/F6OcgB> (at www.gluster.org)
15:15 rwheeler joined #gluster
15:16 kshlm That thread was a dead end. Can you paste the glusterd logs onto someplace? That will help.
15:17 kshlm The glusterd log is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
15:18 qstep sure. where do you want me to send the log to?
15:18 kshlm @paste
15:18 glusterbot kshlm: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
15:19 jbd1 joined #gluster
15:19 qstep im using arch linux so i use pastebinit
15:19 kshlm sure.
15:19 * kshlm uses archlinux as well
15:21 wushudoin joined #gluster
15:22 zerick joined #gluster
15:22 qstep ok. thanks. I will reboot to produce a clear error log. back soon
15:23 gmcwhistler joined #gluster
15:25 qstep joined #gluster
15:26 avati joined #gluster
15:26 shubhendu joined #gluster
15:30 qstep do you want me to post the link here?
15:31 Locane joined #gluster
15:31 kshlm yes.
15:31 kshlm if it doesn't contain anything that should be private
15:32 Locane hello. im new to gluster and i need to do use two node with shared NAS as brick. is this even possible?
15:33 hchiramm_ joined #gluster
15:34 neofob joined #gluster
15:34 qstep nope. here we go: http://pastebin.com/BhAUHXe8
15:34 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:35 qstep @paste
15:35 glusterbot qstep: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
15:38 kshlm seems like gluster is being started before we have a network up. name resolution of the bricks is failing, which is preventing glusterd initialization.
15:39 qstep if i do now: systemctl start glusterd, everything works fine! but if i disconnect my LAN cable and then run 'systemctl start glusterd' it also crashes, showing the exact same error in 'systemctl status glusterd'. I didn't compare the log files for both cases, but i can post the other one as well if that helps. this problem is very annoying and it only occurs once i have a created volume
15:39 qstep just having peers in a cluster doesn't cause problems at startup
15:39 harish joined #gluster
15:40 kshlm the name resolution happens for the bricks.
15:40 qstep yes thats my feeling as well. but how to get around that? In the end I would like to mount my homedir via gluster.
15:40 kshlm so you'll only fail to start when you can't resolve addresses.
15:41 kshlm the glusterd.service file does say 'After=network.target'
15:41 qstep yes it does.
15:41 kshlm so glusterd is being started after the network is up. but resolution is still failing.
15:42 qstep changing this to
15:42 qstep After=sys-subsystem-net-devices-enp0s25.device
15:42 qstep brought no change.
15:42 qstep i think the network might be up but the computer didn't have a dhcp lease at that point yet
15:43 qstep i thought about manually configuring the network....
15:43 aurigus Is it ill-recommended to use ZFS for a brick filesystem? I've done that and get this error from Gluster: Skipped fetching inode size for zfs: FS type not recommended
15:44 B21956 joined #gluster
15:45 kshlm I thought network.target reached only after an ip is obtained?
15:46 qstep dont know.
15:47 kshlm According to this http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ , network.target doesn't have any specific meaning from the point of systemd.
15:47 glusterbot <http://goo.gl/dwPjc5> (at www.freedesktop.org)
15:47 kshlm It is dependent on how the admin has setup the system.
15:48 samppah aurigus: iirc at least zfs on linux has some problems with extended attributes
15:49 aurigus I thought since this article was in existence zfs was ordained safe http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
15:49 glusterbot <http://goo.gl/dSqS9A> (at www.gluster.org)
15:49 aurigus I have 3x 3TB drives on the system so it would be perfect to have zfs handle some redundancy on those disks
15:50 kshlm qstep: How have you configured your network? Using networkmanager or netctl?
15:50 qstep kshlm ah thanks. so I need to do that.
15:52 qstep i think i used neither of those. just started dhcpcd.service (by default on all interfaces)
15:52 qstep i just have one wired connection. so this was working perfectly fine
15:53 bstr Does gluster maintain its own timesource when logging? Im showing accurate time on my system, but gluster logs appear to be in the future...
15:54 kshlm dhcpcd.service backgrounds immediately after starting, which would lead to network.target being reached before an ip was obtained
15:54 social hagarth: bad news, valgrind won't catch the memleak as it is probably some fast loop and it just eats 100% cpu as it gets malloc calls + it's impossible to shut it down, one has to kill it :(
15:54 tqrst unrelated: is it normal that rm -rf $path is 10+ times slower than sshing to my 50 bricks one by one and doing 'cd /brick/mount/point; find $path | xargs -I{} rm -f /my/volume/{}'?
15:54 jag3773 joined #gluster
15:54 social hagarth: but still I have some outputs I can probably share
15:56 Locane Can anyone tell me is it possible to make gluster with two nodes that share together one NAS?
15:57 aurigus Locane: possible yes, but probably not the best-use case
15:57 hagarth social: that would be great, especially the definitely lost blocks
15:58 qstep kshlm ok yes that makes perfect sense. as a workaround i might just manually configure the network, but is there a good solution?
15:59 Locane aurigus: i can believe that. but it's not up to me. i have spent almost two weeks finding out how to do it and every instruction has two bricks
16:00 aurigus yes if you just have one brick (one data source) there is no need to use gluster
16:00 aurigus NAS can attach to multiple locations and if it goes down the data source is inaccessible
16:00 Locane thats what i have been thinking :F
16:00 Locane that i dont need gluster
16:01 qstep kshlm it's possible to prevent dhcpcd from forking to background via -w
16:01 kshlm yes, but the default service file has 'dhcpcd -b -q'
16:02 kshlm if you are fine changing the service files, you can do that.
16:02 kshlm otherwise i think it's best if you use netctl. it's simple enough to setup either a manual network config or a dhcp powered one using netctl.
16:03 zerick joined #gluster
16:04 ndk` joined #gluster
16:04 qstep i will have a look at netctl. thanks for your help. i didn't have a clue how to start fixing my problem
16:05 Locane aurigus: do you think that i can create a MySQL database on the NAS without Gluster, and those two nodes connected to different ports on the NAS can use it together? both send r/w commands
16:05 qstep btw: there is also network-online.target. it also needs to be configured manually, but i think that gluster should rather depend on that
16:11 kshlm the service file provided in the gluster repo uses network-online.target. this is the service file used by fedora (i think).
16:11 kshlm so yeah, network-online.target is the way to go.
16:12 kshlm but even that depends on the services setting up your network doing the correct thing.
16:12 social hagarth: 1,6MB from all files, I'll try to make it smaller by deduping it
16:12 hagarth social: ok
16:13 daMaestro joined #gluster
16:16 social hagarth: http://paste.fedoraproject.org/58995/17366113 http://paste.fedoraproject.org/58996/17366513 http://paste.fedoraproject.org/58997/73668138 http://paste.fedoraproject.org/58998/13861736 http://paste.fedoraproject.org/58999/73673138 http://paste.fedoraproject.org/59000/73675138 http://paste.fedoraproject.org/59001/86173677 http://paste.fedoraproject.org/59002/73680138 http://paste.fedoraproject.org/59003/73683138 http://paste.fe
16:16 glusterbot Title: #58995 Fedora Project Pastebin (at paste.fedoraproject.org)
16:17 social hagarth: http://paste.fedoraproject.org/58999/73673138/ this one has highest definitely lost
16:17 glusterbot Title: #58999 Fedora Project Pastebin (at paste.fedoraproject.org)
16:17 social err no, this one :) http://paste.fedoraproject.org/59001/86173677/
16:17 glusterbot Title: #59001 Fedora Project Pastebin (at paste.fedoraproject.org)
16:26 giannello joined #gluster
16:27 bugs_ joined #gluster
16:28 nikkk i have four hosts - two servers, two clients.  why would the servers write data to a gluster volume *slower* than the clients?
16:28 nikkk it's weird
16:28 vpshastry left #gluster
16:28 vpshastry joined #gluster
16:29 hchiramm_ joined #gluster
16:31 itisravi joined #gluster
16:32 qstep joined #gluster
16:32 qstep kshlm still there?
16:33 kiwikrisp joined #gluster
16:33 qstep now i changed to network manager and enabled the systemctl target NetworkManager-wait-online
16:34 qstep this automatically makes glusterd work correctly
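qstep's fix boils down to making something actually block until the network is usable, then ordering glusterd after it. A sketch for systemd, assuming NetworkManager manages the interface; the drop-in path is the standard override location:

    # block network-online.target until an address has been obtained
    systemctl enable NetworkManager-wait-online.service

    # override glusterd's ordering without editing the packaged unit
    mkdir -p /etc/systemd/system/glusterd.service.d
    cat > /etc/systemd/system/glusterd.service.d/wait-online.conf <<'EOF'
    [Unit]
    Wants=network-online.target
    After=network-online.target
    EOF
    systemctl daemon-reload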
16:36 kiwikrisp Anyone have any information on how to configure 2 node replicate NFS volume for a "backend network for storage" as discussed in this documentation? http://gluster.org/community/documentation/index.php/Network_Configuration_Techniques
16:36 glusterbot <http://goo.gl/90djZX> (at gluster.org)
16:38 MrNaviPacho joined #gluster
16:43 qstep left #gluster
16:43 hagarth social: definitely lost: 3,486 bytes in 208 blocks - doesn't seem much at all!
16:45 social ==27443==    definitely lost: 391,502 bytes in 2,492 blocks ?
16:45 social still small, I didn't get the main leak as I said, valgrind just died on it
16:46 hagarth damn
16:46 theron joined #gluster
16:49 social anyway let's look at http://paste.fedoraproject.org/59001/86173677/ ; at the bottom, process 27443 is probably closest to the issue
16:49 glusterbot Title: #59001 Fedora Project Pastebin (at paste.fedoraproject.org)
16:51 hagarth social: if you look at line 2515
16:52 hagarth more than half of the definitely lost is for brickinfo structure. This is supposed to persist for the complete runtime of the process.
16:56 LoudNoises joined #gluster
16:58 social hagarth: as it's a loop somewhere in locking (deduced from the glusterd logs) I'd guess it can be something small just called many times
16:58 social hagarth: something like line 2470
16:59 khushildep joined #gluster
17:00 sprachgenerator joined #gluster
17:01 ndk`` joined #gluster
17:03 rotbeard joined #gluster
17:03 devoid joined #gluster
17:04 bsaggy Hey Folks.  Currently using Gluster on two physical nodes to replicate data between, and clients connect to the nodes to access the data.  I'm migrating to a new infrastructure and don't have the need of replication with it.  Does it make sense to spin up a VM as a Gluster node just for the sake of presenting itself as a file server to its clients?
17:04 devoid joined #gluster
17:05 hagarth social: http://fpaste.org/59022/ will fix the leak in line 2470
17:05 glusterbot Title: #59022 Fedora Project Pastebin (at fpaste.org)
17:05 hagarth social: and the one in 2456 is the same too
17:07 social hagarth: 2487 is again around locking but different stuff, should I create ticket in BZ for these leaks?
17:07 avati joined #gluster
17:09 theron joined #gluster
17:12 kiwikrisp bsaggy: how are the clients connecting to the physical nodes? (native client, CIFS, NFS)
17:12 bsaggy kiwikrisp: Currently native client.
17:12 jbd1 joined #gluster
17:13 hagarth social: yes, please.
17:13 hagarth social: would you mind sending the patch for 2470?
17:18 kiwikrisp bsaggy: so if you don't need the replicative structure, what are you hoping to gain by spinning up gluster VM's? It could be done but would seem to add another layer of latency because it's stacked on top of your new storage infrastructure. If the native client connection is still required, then spinning a distributed gluster volume could possibly help to alleviate the impact of the extra virtualization layer.
17:18 social hagarth: sorry my system just died, gnome-shell + abrt is lovely pair
17:18 glusterbot New news from newglusterbugs: [Bug 1038247] Bundle Sources and javadocs in mvn build <http://goo.gl/SzSqn5>
17:18 social hagarth: I think I should just continue in https://bugzilla.redhat.com/show_bug.cgi?id=1032122 as the valgrind was on the affected node
17:19 glusterbot <http://goo.gl/HbDvQ6> (at bugzilla.redhat.com)
17:19 glusterbot Bug 1032122: unspecified, unspecified, ---, kparthas, NEW , glusterd getting oomkilled
17:19 hagarth social: yeah, you can update that.
17:22 hagarth social: can you also try the patch I posted in fpaste?
17:24 social I'll prepare some testing env
17:25 bsaggy kiwikrisp: Good points.  The old infrastructure is a linux based storage solution, and the new will be Windows.  With my lack of experience, I'm just not sure how the linux clients will connect to a windows storage device - I think CIFS can do it, but how does that compare to Gluster?
17:25 bsaggy kiwikrisp: However, I haven't decided yet, but it may be beneficial to have a distributed volume; I may use Gluster to replicate to a DR site.
17:25 bsaggy Then again, Windows should also be able to handle the offsite replication, I would imagine.
17:28 hagarth social: thanks, i will also run some tests.
17:36 sroy_ joined #gluster
17:38 Mo__ joined #gluster
17:41 MrNaviPa_ joined #gluster
17:42 hchiramm_ joined #gluster
17:44 dylan_ joined #gluster
17:44 mkzero joined #gluster
17:44 kiwikrisp bsaggy: depending on the Windows storage device, iSCSI might be the best solution to connect the Linux clients, CIFS will work as well but has additional overhead. If going from Windows to gluster, obviously CIFS is the only choice. I'm still not clear if you have Windows or Linux clients connecting to the gluster.
17:47 bsaggy kiwikrisp: Linux clients connect to the gluster cluster.
17:48 bsaggy kiwikrisp: Interesting, so the Linux clients may connect to the Windows storage via iSCSI?  That would eliminate the need to use gluster in this new environment.  As much as I would like to use gluster, I don't want to make things overly complicated.
17:48 glusterbot New news from newglusterbugs: [Bug 1032122] glusterd getting oomkilled <http://goo.gl/HbDvQ6>
17:49 Peanut Ouch.. that's gotta hurt..
17:49 Peanut Also, that link goes to 'homeimprovement.com/bugzilla.redhat.com/', that's a bit odd.
17:51 social interesting, how did the homeimprovement.com get there?
17:51 partner so it did :)
17:51 Peanut Maybe a new 'feature' from goo.gl?
17:56 piffio joined #gluster
17:58 piffio hi all
17:58 piffio I'm about test gluster in our infrastructure, but I'd like to get some information I wasn't able to find online
17:58 piffio basically, we have a 100% virtualized environment, using Xenserver. The storage backend is a SAN array
17:59 piffio I'm setting up the new dev environments, and the basic use case is to share the $HOME directories across multiple dev servers for all the developers
17:59 piffio my idea was to set up 2+ storage servers using glusterfs, and then use the dev servers as gluster clients
17:59 piffio I'd like to avoid nfs
18:00 piffio now, question
18:00 hchiramm_ joined #gluster
18:01 _pol joined #gluster
18:01 piffio since we're already using a SAN storage as backend, does it make sense to create an iSCSI volume for each storage server, and then use the mounted volumes (ext4 / xfs) as gluster bricks?
18:04 avati joined #gluster
18:07 kiwikrisp bsaggy: Yes, though you encounter complications if you have multiple clients connecting to the same data. If you need multiple clients connecting to the same data then you'd be best off spinning up a Linux VM using NFS to serve the data store on the Windows Storage device mounted through iSCSI. That sounds more complicated than it actually is. Can this Windows storage device act as a NAS (expose CIFS or NFS shares)?
18:07 kiwikrisp If it can, you could serve the data directly from the storage device as well.
18:07 Locane can two nodes share the same volume on a fiber stack?
18:08 Locane so i don't need distribute, mirror or stripe data.
18:10 partner piffio: why not, that gives you the ability to move that brick anywhere without affecting the clients. like, if your usage requirements grow and SAN can't provide it or you want to go on top of HW you just replace the brick with the new one
18:11 kiwikrisp piffio: that should work. I've used that configuration before for testing purposes.
18:11 partner though i'm only using such possibilities over fibre, so in case there is anything with combining those two i wouldn't know, thus try it out
18:12 piffio good, thanks
18:12 piffio I'll run some tests and see if there is any major performance impact
18:13 piffio kiwikrisp: did you use the SAN network (we use iSCSI 10Gb) for the gluster filesystem as well, or a separate network?
18:13 kiwikrisp piffio: SAN network.
18:13 partner for that same reason i've sometimes set up simple single-brick volumes, to have them easily extended or moved around
18:14 piffio cool, thanks guys
18:14 piffio I'm going to try it out and come back with probably more questions :)
18:18 hchiramm_ joined #gluster
18:19 glusterbot New news from newglusterbugs: [Bug 1038261] Gluster volume status hang <http://goo.gl/oSo2CE>
18:20 pravka_ joined #gluster
18:21 pravka joined #gluster
18:24 cfeller joined #gluster
18:27 bsaggy kiwikrisp: OK, yea, I probably wouldn't want to use iSCSI then.  I'm not sure if the Windows storage can act as a NAS; I'll have to look into that.
18:27 lpabon joined #gluster
18:31 kiwikrisp bsaggy: I wouldn't rule it out altogether, you just have to change the way you use it. You can't use it for the client connection but it's the best choice for the gateway machine connection to the backend storage device if the device doesn't support NAS functions.
18:32 bsaggy kiwikrisp: Cool.  I'll do more research into this altogether.  Really appreciate your help and ideas!
18:36 hchiramm_ joined #gluster
18:40 vt102 joined #gluster
18:44 Locane can two nodes share the same volume on a fiber stack data storage?
18:45 Locane so i don't need distribute, mirror or stripe data.
18:53 semiosis Locane: wat?
18:55 failshell joined #gluster
18:56 vt102 left #gluster
19:01 aliguori joined #gluster
19:05 Locane semiosis: i have two gluster servers and a remote data storage where i need to make one volume. Is it possible for two servers to share and use one data storage? Can i mount the same volume on two different servers?
19:06 semiosis Locane: gluster servers (usually) use local storage, not shared storage
19:06 Locane this is not usually :(
19:06 semiosis why don't you just use an NFS server with your shared storage?
19:08 Locane don't know.. i have been assigned to make this happen :F
19:09 semiosis Locane: is this a job assignment or school assignment?
19:10 Locane job assignment
19:10 MrNaviPacho Locane: what is the actual goal?
19:11 Gugge if you have shared storage, just use it :)
19:11 Locane two servers share a database and if one server goes down we still have access to the data
19:11 MrNaviPacho Locane: you don't want gluster.
19:11 Locane im getting in there
19:12 Locane that i don't want it
19:12 Gugge how do you access the storage system?
19:12 hchiramm_ joined #gluster
19:12 Locane vmware
19:12 Gugge so, two guests with a shared blockdevice?
19:12 Locane i have added datastorage as hdd in virtual server
19:13 Locane in both server
19:13 Gugge so, two guests with a shared blockdevice?
19:13 Locane yeah
19:13 Gugge use a cluster fs then
19:13 semiosis i dont think it's possible to use glusterfs for that
19:13 Gugge if you really need them to access the device at the same time
19:14 MrNaviPacho Even if it's possible it's pointless.
19:14 semiosis haha, true
19:14 Gugge or use heartbeat or pacemaker to have one of them active at a time :)
19:14 MrNaviPacho What about just using NFS and failover?
19:15 semiosis +1
19:15 Gugge that would require the storage to support NFS :)
19:15 MrNaviPacho No
19:15 MrNaviPacho You have it mounted.
19:15 MrNaviPacho Setup the two servers as NFS servers.
19:15 semiosis the vmware instances would do the nfs, failover between them
19:15 semiosis afk
19:15 Locane there will be multiple data queries so i think gluster would be a buffer
19:15 Gugge Locane: gluster is not the tool for your problem :)
19:16 MrNaviPacho +1
19:16 Locane yeah :\ have to explain that to boss somehow
19:17 Locane boss wants it because it's easy to scale up
19:17 Locane when we get more datastores then i think gluster would be good
19:17 MrNaviPacho lol, I love management.
19:18 Locane :D
19:18 MrNaviPacho I don't know about the setup but you could prob split the block into two.
19:18 Locane i think its my only option
19:18 MrNaviPacho Then use gluster.
19:19 MrNaviPacho Do you know anything about the actual hardware?
19:19 Locane i know something about the fiberstack, nothing about the server hardware. there are multiple VMs on one server
19:20 Locane but yeah.. i think i need to split store to get gluster working
19:20 MrNaviPacho Eh, if that's what he wants.
19:21 MrNaviPacho Split the block into two.
19:21 Locane i know i could do this with one server and two clients
19:25 nikkk anyone know if it's common practice to have a cluster where each node is both client and server?
19:25 nikkk i just need to keep files replicated in real time across a bunch of servers without using a hack like rsync or unison
19:26 nikkk hoping to not have separate servers for metadata or file servers (nfs etc)
19:26 nikkk but i might be dreaming of a product that doesn't exist :)
19:26 MrNaviPacho Ya, I have a setup that's exactly that.
19:27 nikkk i'd imagine each node is connecting to localhost when mounting
19:27 MrNaviPacho Yes
19:28 nikkk the biggest problem i've run into when testing gluster is when i non-gracefully lose a server node everything else just kinda hangs out until the network.tcp-timeout expires
19:29 nikkk so if someone trips over a cable and kills one of 12 servers, then the other 11 hang until that timeout is hit
19:29 nikkk (in your setup)
19:29 nikkk or is there something i'm missing
19:30 MrNaviPacho Is this actually happening or are you guessing?
19:30 nikkk happening
19:31 nikkk does it with server/client model too though
19:31 nikkk i'm using VMs to do the testing though
19:31 nikkk if i disable the nic on one vm everything hangs until the timeout expires
19:31 nikkk then i can r/w again
19:31 nikkk maybe not the case with real hardware though?
19:32 kkeithley that's well understood. You can set shorter tcp timeouts. And move your cables where people won't trip over them.
19:32 hchiramm_ joined #gluster
19:32 nikkk :P
19:33 nikkk i mean a non-graceful failure in general
19:33 MrNaviPacho I have not had that issue, what is the default timeout?
19:33 nikkk 42 sec is default
19:33 nikkk i switched to 10
19:33 nikkk it's quicker but the hang still happens
19:34 nikkk i might just have to lab up some real hardware.. wondering if it's a vmware oddity
19:34 MrNaviPacho I don't think so.
19:34 nikkk you're using hardware or vms for the cluster?
19:35 kkeithley the hang occurs with bare metal. It's a tcp thing, it's all in the kernel. gluster can't do anything about it other than setsockopt a shorter timeout. You don't want to set it down to zero or you'll get spurious timeout storms in all likelihood
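The 42-second default discussed here matches gluster's volume-level ping timeout; assuming that is the knob in play (the volume name is hypothetical), it is tuned per volume rather than per client:

    # seconds a client waits for an unresponsive brick before marking it down;
    # too low and transient hiccups cause spurious disconnect/reconnect cycles
    gluster volume set myvol network.ping-timeout 10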
19:36 nikkk yeah of course
19:36 MrNaviPacho I'm using hardware, but I have a very odd setup.
19:36 nikkk just wanted to make sure i wasn't the only one :]
19:36 nikkk any pros/cons that you've encountered over the server/client model?
19:36 MrNaviPacho Not really, seems to work great.
19:37 MrNaviPacho Like I said my setup is kinda crazy.
19:37 MrNaviPacho And I would guess that someone on here would yell at me for it.
19:37 nikkk dare i ask what makes it crazy?
19:38 MrNaviPacho I setup our application to run reads from the local brick directly.
19:38 nikkk without mounting?
19:39 MrNaviPacho Heh, I have it mounted and send writes to it.
19:39 MrNaviPacho But I send reads to the actual brick.
19:39 nikkk what's the benefit?
19:39 MrNaviPacho (not the mount)
19:39 nikkk oh, yeah that makes sense
19:39 nikkk kinda
19:39 nikkk :)
19:40 MrNaviPacho Reads and stat calls are as fast as local disk.
19:40 MrNaviPacho Writes still get synced.
19:41 MrNaviPacho I don't know enough about gluster's architecture to say if this is bad or not.
19:42 nikkk i haven't found any real documentation on that topic
19:42 nikkk because it's something i sorta wondered about too
19:43 MrNaviPacho I think that it would be an issue if they got out of sync.
19:43 MrNaviPacho Because it would not heal.
19:44 nikkk right
19:45 MrNaviPacho But it's a possible benefit.
19:45 nikkk yeah something to think about.  this setup is for a forum with a few thousand user uploads per day
19:46 MrNaviPacho Should be fine.
19:47 MrNaviPacho I was forced to do what I did because the php application we are using does way too many calls to stat ever pageload.
19:47 MrNaviPacho every*
19:48 nikkk have you thought about php apc?
19:48 semiosis nikkk: plenty of people have client mounts on servers.  couple things to note... this is prone to split-brain in network partitions, if writes go to same file on the two servers when they're not connected to each other... use quorum to prevent this
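The quorum semiosis mentions is client-side quorum on replicated volumes; a minimal sketch, with the volume name made up:

    # refuse writes on a replica set that has lost quorum, to avoid split-brain
    gluster volume set myvol cluster.quorum-type auto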
19:49 semiosis apc helps, especially if you turn off stat (and reload apache whenever php files change)
19:49 semiosis also using an autoloader helps, but you need application support for that
19:50 nikkk i was just going to sugesst apc
19:50 semiosis you already did :)
19:50 nikkk did i?
19:50 nikkk haha sorry distracted
19:51 nikkk i'm not familiar with quorum
19:51 semiosis another note about reading directly from bricks, you should use noatime,nodiratime on your brick mounts (should do this anyway) but it's real important when reading from bricks since without that even reads modify the brick fs
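The brick-mount advice above translates to an fstab entry along these lines; the device, mount point, and filesystem are placeholders:

    # brick filesystem: no atime updates, so reads straight off the brick stay read-only
    /dev/vg0/brick1  /bricks/b1  xfs  noatime,nodiratime,inode64  0  0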
19:52 hchiramm_ joined #gluster
19:53 vpshastry left #gluster
19:57 nikkk makes sense
19:57 nikkk i've never really found a good use for *atime anyway
20:03 zerick joined #gluster
20:13 andresmoya joined #gluster
20:17 andresmoya can you read write to a geo replication slave
20:26 bsaggy joined #gluster
20:26 bsaggy joined #gluster
20:44 smellis so I was excited about libgfapi support in rhel 6.5 (centos 6.5), but my libvirt doesn't seem to understand pool type gluster, as per the libvirt docs.  Am I missing something?
20:50 badone joined #gluster
20:57 _pol joined #gluster
21:07 psyl0n joined #gluster
21:12 sjoeboo joined #gluster
21:36 elyograg Regarding my problems doing a rolling upgrade from 3.3.1 to 3.4.1 ... I just noticed that the instructions started off with "stop all glusterd, glusterfs, and glusterfsd processes."  I didn't do this ... I upgraded the packages and rebooted.  Could that be a problem?
21:37 elyograg upgrading two of the servers in this manner made the filesystem basically freak out -- input/output errors, unable to read directories, etc.  I upgraded the other servers and clients in the cluster and now everything seems to be OK.
21:42 theron joined #gluster
21:43 semiosis are servers also clients?
21:43 semiosis usually you need to upgrade all servers before any clients
21:43 semiosis if servers are also clients, this is not possible to do in a rolling manner
21:48 B21956 joined #gluster
21:49 andresmoya is it possible to write to a geo replication slave?
21:53 elyograg Some of the servers do have the volume mounted locally, but I did not initially upgrade those.
21:54 elyograg The two servers that got initially upgraded were the ones that I put in with "add-brick"
21:54 elyograg they did not have the volume mounted.
22:17 psiphi75 joined #gluster
22:18 cogsu joined #gluster
22:22 elyograg semiosis: it's all on a testbed anyway.  I'm just wondering whether the upgrade could go badly because I didn't stop glusterd, glusterfs, and glusterfsd processes before I upgraded the packages.  I rebooted immediately after upgrading the packages, and I only did one server at a time.
22:25 failshel_ joined #gluster
22:34 delhage joined #gluster
22:34 _pol_ joined #gluster
22:44 andresmoya left #gluster
22:48 neofob left #gluster
22:57 psyl0n joined #gluster
23:05 theron joined #gluster
23:09 devoid left #gluster
23:13 rwheeler joined #gluster
23:18 gdubreui joined #gluster
23:26 khushildep joined #gluster
23:45 smasha82 joined #gluster
23:46 smasha82 Hi - I have updated my glusterfs-server package via apt-get and now gluster is stating no volumes present
23:46 smasha82 all of the config files are still present
23:47 smasha82 how can i re-import the config
23:48 daMaestro from what version to what version?
23:48 smasha82 looks like it went from Version: 3.2.5-1ubuntu1  to Version: 3.4.1-ubuntu1~precise1
23:51 gdubreui joined #gluster
23:52 semiosis see ,,(upgrade notes)
23:52 glusterbot I do not know about 'upgrade notes', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
23:53 semiosis ,,(3.3 upgrade notes)
23:53 glusterbot http://goo.gl/cuizcJ
23:53 semiosis ,,(3.4 upgrade notes)
23:53 glusterbot http://goo.gl/1x0Erz
23:54 smasha82 ok
23:54 smasha82 issue resolved
23:54 smasha82 the install went into /var/lib/glusterd
23:55 smasha82 moved my config from /etc/glusterd into /var/lib/glusterd and welcome back volume
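What smasha82 describes, spelled out as a sketch; this assumes the 3.2-era state under /etc/glusterd is intact and that the Ubuntu package's init script is named glusterfs-server:

    # stop the daemon, copy the old working directory into the new location, restart
    service glusterfs-server stop
    cp -a /etc/glusterd/. /var/lib/glusterd/
    service glusterfs-server start
    gluster volume info    # the volumes should be listed again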
23:55 psyl0n joined #gluster
