
IRC log for #gluster, 2014-07-21


All times shown according to UTC.

Time Nick Message
00:01 mortuar joined #gluster
00:16 dcope joined #gluster
00:30 dcope hm so does anyone know how the load balancing works?
00:31 dcope ie if the client decides to send a request for a file to a server in the pool...
00:31 dcope does the client pass off the connection or does the client act like a proxy
00:42 jobewan joined #gluster
01:12 sonicrose dcope, load balancing is dependent on your volume configuration, every configuration will work differently
01:12 sonicrose not sure what you mean about handing off connections or proxies
01:13 dcope sonicrose: so imagine you have a person who requests a file by asking the machine running GlusterFS client
01:13 sonicrose do you mean glusterfs client or nfs client too makes a difference
01:13 dcope and gluster decides to get the file from "server 1 in the pool"
01:13 dcope how does gluster transfer the file?
01:13 dcope directly to the person requesting or does it go back to the client then to the person requesting
01:15 sonicrose well, when you create a glusterfs mount, there is a client that handles that mount.  that client has to mount some IP address to contact the cluster, it can be any IP address in the pool that's running glusterd and is a peer in the cluster
01:15 sonicrose but that mount IP has no impact at all really when it comes to where the data comes and goes from
01:16 sonicrose when you read from a file your client starts sending requests for the file
01:16 dcope sonicrose: since the files are on the client, can it respond with the file from the client?
01:17 dcope or will it always read from a machine in the app pool?
01:17 sonicrose there are no files kept on the client
01:17 sonicrose whichever server that has the file responds first will send the requested data to the client
01:17 dcope interesting...
01:18 dcope the way i just set it up there were files on the client
01:18 dcope and when i created them i saw them get replicated across to the 2 pooled machines
01:18 sonicrose are you using the same machine as the server and the client?
01:18 sonicrose ie: are there gluster bricks running on the same machine you are using the client from
01:18 dcope no
01:19 dcope so i had one machine with the client running. and then 2 machines with the server running.
01:19 sonicrose gluster mounts work in essence just like an NFS mount in the sense that when you ls the directory you're seeing the directory listing from the remote systems.  nothing gets stored on the client.  when you read and write to those files you are reading and writing directly across the network
01:20 sonicrose theres no sort of local file caching
01:20 sonicrose ie: dropboxesque
01:20 dcope yeah i mounted the volume, then created the files on the machine running the client
01:20 sonicrose in other words, running the gluster client on a machine will consume no disk space on that machine aside from logs that it makes
01:21 dcope oooh
01:21 dcope ok it is making more sense now
01:21 dcope sonicrose: so then next question... if you install the client on a machine with files you want to add to gluster
01:21 sonicrose if you created a file inside a mounted volume then it was written out to the bricks in the cluster
01:21 dcope you can copy them into the mount, then delete them?
01:21 sonicrose you have to just copy them yea
01:21 dcope then they're safe to delete from the original location?
01:21 sonicrose cp works good, make sure you use something that maintains sparse files
01:22 lyang0 joined #gluster
01:22 sonicrose after you're satisfied that your data is intact and working successfully on the gluster volume then yes
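
A minimal sketch of that initial copy, assuming the volume is mounted at /mnt/gluster and the source data lives in /srv/data (both hypothetical); cp's --sparse flag and rsync's -S flag both preserve sparse files:

    cp -a --sparse=always /srv/data/. /mnt/gluster/data/
    # or
    rsync -aS /srv/data/ /mnt/gluster/data/
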
01:22 dcope nice
01:22 dcope sonicrose: does gluster do any sort of caching for frequently accessed files?
01:22 sonicrose it is nice
01:22 sonicrose yes
01:23 dcope cool
01:23 sonicrose depends on how much RAM you've got
01:23 dcope 32gb in the machine i intend on running the client on
01:23 sonicrose the client ram doesn't matter
01:23 sonicrose the caching is on the brick servers
01:23 dcope oohh
01:23 sonicrose although
01:24 sonicrose i suppose your client would probably use the VFS cache too
01:24 sonicrose i have direct IO enabled on my client sides
01:24 sonicrose so that it doesn't fill my client's memory
01:24 sonicrose in my case the clients have < 1GB of ram
01:24 sonicrose they use like 440MB
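
The client-side direct I/O sonicrose describes maps to the glusterfs FUSE mount option direct-io-mode; a hedged example, assuming a volume named myvol on a server named storage1 (both hypothetical):

    mount -t glusterfs -o direct-io-mode=enable storage1:/myvol /mnt/myvol
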
01:25 sonicrose i have 8GB of RAM in each of my 6 brick server VMs
01:25 dcope and that is sufficient?
01:25 sonicrose the brick servers will use up all available RAM in the VFS cache for data accessed
01:25 sonicrose so in my case i've got like a 48 GB RAM cache
01:26 sonicrose you can double buffer though i suppose and have the client also cache the files
01:26 sonicrose double layer ram cache
01:26 sonicrose chances are it probably is already doing that
01:26 dcope cool
01:27 dcope so i intend on having Client (with Nginx) -> 2 pooled servers
01:27 dcope to start
01:27 sonicrose if you write out the file and it says it completed at 1.9GB/sec then it probably got ram cached.  if you check your NIC card on the client you should see that it sent X mb/sec.  the gluster client writes the data to both bricks at the same time, so if you see the nic is sending 100MB/sec, your application is only really seeing 50MB/sec
01:28 sonicrose assuming you have 2 brick replicas
01:28 dcope ah
01:28 dcope that is good to know
01:28 sonicrose if you had replica 3 then 100MBs on the nic would only get you 33MB/sec to the app
01:29 sonicrose but reading is a different story
01:29 dcope the data is *always* sent from the client to the end user correct?
01:29 sonicrose it comes from the 1 server that responds first
01:29 dcope im wondering if a 1gbps port to the client which has a 10gbps port would be sufficient
01:29 sonicrose yes, if you're caching the data client side, then the VFS cache pressure settings would dictate when it actually would write that data out of the cache and to the file system
01:30 dcope well
01:30 dcope the connection is always user <-> client <-> pool
01:30 dcope correct?
01:31 sonicrose if your brick servers have 10gbe to the switch fabric and your client has 1gb, and assuming your disks can keep up, then you'd expect about 110MB/sec reads and 55 MB/sec writes
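
The arithmetic behind those numbers, as a rough sketch: the FUSE client sends each write to every replica itself, so client-side write throughput is roughly NIC bandwidth divided by the replica count, while reads come from a single brick.

    1 GbE wire speed        ~110 MB/s
    reads  (one replica)    ~110 MB/s
    writes (replica 2)      ~110 / 2 = ~55 MB/s
    writes (replica 3)      ~110 / 3 = ~37 MB/s
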
01:31 dcope so a user will never be served a file directly from the pool?
01:31 sonicrose user -> application -> file system -> gluster client -> network -> gluster brick server
01:31 harish__ joined #gluster
01:32 dcope hm 110MB/s seems slow
01:32 dcope but i guess if quickly accessed files are cached it should be ok for bursts
01:32 sonicrose in this case the application doesn't know its doing anything differently when it reads/writes from a gluster mount, it thinks its just like any other file system, so the linux VFS cache will do its things and handle it like its just another disk
01:32 sonicrose nginx is your application right
01:32 dcope yes
01:32 sonicrose so the user in this case is actually behind a browser
01:33 dcope yup
01:33 sonicrose client -> browser app -> network -> nginx -> filesystem -> gluster client -> network -> gluster server
01:33 sonicrose nginx is the only thing that sees this file system
01:33 dcope yeah
01:34 sonicrose the file would be read from the gluster disk by nginx and then sent back over the network by HTTP or HTTPS
01:34 sonicrose the actual user never makes any gluster connections, just HTTP
01:34 dcope ah
01:34 sonicrose nginx doesn't make gluster connections either
01:34 sonicrose it just uses the filesystem provided in linux
01:35 sonicrose if the files it wants happen to be mounted by gluster then the data will come and go from the brick servers in the background
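
A hedged example of the mount nginx would sit on top of, assuming a volume named webvol on a server named storage1 and a shared document root at /var/www/shared (all hypothetical):

    mount -t glusterfs storage1:/webvol /var/www/shared
    # or persistently, in /etc/fstab:
    storage1:/webvol  /var/www/shared  glusterfs  defaults,_netdev  0 0
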
01:35 Alex I have seen non zero amounts of issues with that setup, fwiw.
01:35 dcope Alex: that's great to hear :)
01:35 sonicrose nonzero amounts of issues?
01:35 Alex Non zero.
01:35 Alex AS in >0
01:35 dcope oh
01:36 dcope what problems?
01:36 Alex Such as the gluster brick processes holding open thousands (>100k) file descriptors, and the vfs cache seemingly *not working* - repeated requests for the same range (Range=bytes x-y at the HTTP level) still causing a brick operation.
01:36 dcope yikes..
01:37 Alex I'm still trying to get to the bottom of it - I am almost sure it is a minor misconfiguration or bug. Our workload is somewhat specific - with many many many range requests on large files.
01:37 dcope oh
01:37 dcope perhaps i will not run into those then
01:37 dcope i am serving small files (5 - 15mb) and no range requests
01:37 Alex We're moving towards NFS currently as in our testing it seemed to suck much less for the same setup. But - I guess what I'm trying to say is, make sure you do representative testing. :)
01:38 Alex Which, y'know, is only 6 words - so I could've said that in the first place.
01:38 dcope heh
01:38 sonicrose well fwiw i've had like hundreds of issues getting gluster going, and they all basically result in lockups and hung open files, stale file handles, bad descriptors, you name it, all different ways of saying the plumbing got clogged.
01:38 Alex Hehe, yeah, that kinda thing
01:38 dcope how'd you fix it?
01:38 dcope restarting the daemon?
01:39 Alex For me, I had to kill the buggered brick process
01:39 dcope i'm looking to use this instead of a traditional cdn for cost reasons
01:39 Alex (and pray)
01:39 dcope i don't want to have to worry about it
01:39 dcope lol
01:39 sonicrose restarting the daemon would usually unclog the pipes but they just clog back up.  the root cause was not having all the settings on the gluster volume just right
01:39 Alex dcope: If I told you this happened to be being used for a CDN origin... cough :)
01:39 sonicrose once i found the right formula then everything is peachy
01:39 dcope heh
01:40 sonicrose the #1 biggest culprit of crashes and lockups i had was the NFS.DRC feature
01:40 sonicrose memory leak city
01:40 Alex Ah, interesting
01:40 dcope hm i wont be using nfs
01:40 sonicrose once that was sorted out and my NFS servers were no longer getting killed by OOM-killer
01:41 sonicrose that's now disabled in the current release
01:41 sonicrose 2nd problem i had was with NFS NLM not working correctly
01:42 sonicrose stat-prefetch on also really screwed things
01:44 sonicrose started making all my files say they were 2.1TB
01:44 sonicrose and everything would lock up
01:44 sonicrose after that, making sure that rpc.statd was running on the gluster servers (/etc/init.d/nfslock) fixed almost all the rest of the lockups
01:44 sonicrose also i think of my own fault some how I had my volume option set to cluster.eager-lock enable instead of eager-lock on
01:44 sonicrose so maybe my eager-locks were also not working until now
01:44 sonicrose but i dont have lockups anymore, except when i'm trying to run my backups which are accessing files which are huge and constantly being written to.  i hope i've solved that contention by fixing eager-lock
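
A hedged sketch of the knobs mentioned above, assuming a volume named myvol (hypothetical); option names and defaults vary between releases, so check the option list on your version before applying anything:

    gluster volume set myvol cluster.eager-lock on
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol nfs.drc off        # only if your release exposes nfs.drc
    /etc/init.d/nfslock start                   # keeps rpc.statd running on EL-style servers
    gluster volume set help                     # list tunables and their defaults
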
01:45 dcope if a file gets deleted from one pooled machine is it smart enough to pull it from another?
01:46 sonicrose well, if the app deletes a file thru to a gluster mount the file will be simultaneously deleted from both bricks
01:46 dcope yeah but on the brick machine... say a hdd dies
01:46 dcope and a new one goes in
01:46 sonicrose if, you lost a brick though, and have the data on another brick, you can replace the brick and it will self heal
01:46 dcope nice
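
A hedged sketch of that disk-replacement heal, assuming a volume named myvol (hypothetical): once the replacement brick is back online at the old path (which may require restoring the brick's volume-id xattr or a replace-brick, depending on the release), a full heal repopulates it:

    gluster volume heal myvol full
    gluster volume heal myvol info        # watch the remaining entries drain
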
01:48 sonicrose for like 9 months everyone has told me that getting gluster based storage to work on citrix xenserver was not possible.  i even had contractors quit the job lmao
01:48 sonicrose just in the past few weeks though i got it nailed
01:48 dcope lol
01:48 dcope nice
01:50 sonicrose im quite proud of this configuration... i'm using the same 3 servers to run the virtualization, and the storage... totally SANless, but i can still have the live migration features and HA daemon stuff going on
01:50 sonicrose i can take a whole server offline for reboots and the storage just keeps on going
01:50 sonicrose when i bring the server back online it re-heals
01:51 sonicrose i did rolling restarts of all 3 hosts and thus all 6 brick VMs and kept 100% uptime on the VMs running above it
01:51 sonicrose no lockups, no splitbrain
01:52 sonicrose took under an hour to do all 3 hosts
01:52 sonicrose built this latest pool in 8 hours flat
01:53 Alex So the VMs blockstorage runs just on gluster fuse mounts?
01:53 Alex s/blockstorage/storage/
01:53 glusterbot What Alex meant to say was: So the VMs storage runs just on gluster fuse mounts?
01:53 sonicrose nope alex, NFS
01:53 Alex Ah
01:53 sonicrose the xenserver hypervisor is unmodified
01:54 sonicrose xenserver requires direct IO for its vm storage.  it can get direct IO OK on an NFS mount, but not on a fuse mount
01:54 sonicrose since the el5 fuse kernel module doesn't have support for direct IO
01:54 sonicrose but... i have a VM that runs on each of the 3 hosts, and each host points to its own VM for the NFS traffic
01:55 sonicrose so the NFS all goes over localhost
01:55 sonicrose effectively
01:55 sonicrose and the only traffic physically coming off the servers themselves is fuse traffic
01:55 sonicrose so it 'fans out'
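
For reference, gluster's built-in NFS server only speaks NFSv3 over TCP, so a per-host mount of the local NFS VM as described would look roughly like this (nfs-vm-local, myvol and /mnt/vmstore are hypothetical names):

    mount -t nfs -o vers=3,proto=tcp nfs-vm-local:/myvol /mnt/vmstore
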
01:58 sonicrose all told, there are 3 hosts, 3 NFS VMs (which can live migrate between the hosts), and 2 gluster server VMs per host (each one bound to a different 10GbE NIC), each gluster server VM has 3 drives and hosts 3 bricks
01:58 sonicrose the gluster server VMs can't live migrate because they have the hard drives attached as local storage
01:59 sonicrose i did actually modify udev on the xenserver host so that it would show my hard drives to VMs as removable storage and not as local storage
01:59 sonicrose but it works the same way, now i can hot swap hard drives directly to the VMs
01:59 sonicrose so i did modify dom0 shame on me
02:01 harish__ joined #gluster
02:02 dcope so if i want to emphasize in memory file caching... i can just get servers (the machines in the app pool) with beefy ram right?
02:03 dcope sorry for the beginner questions... i've just started looking into glusterfs a few hours ago
02:03 sonicrose if you can afford it, SSD caching of the file system is probably the best way to go
02:03 sonicrose but, if not, getting like 64GB of RAM or 128GB of ram in a server isn't usually that much cost difference anymore
02:04 sonicrose ideally you'd wanna get a RAID card that supports SSD and RAM caching onboard...  configure a bunch of little RAID0 's
02:04 dcope oh
02:04 sonicrose but... that's overkill if you're just using 1gbe
02:04 dcope im going to get a 10gbps machine
02:05 sonicrose i think a pair of regular sata drives can max out a 1gbe link
02:05 sonicrose 10gbe seems to take us past the threshold of acceptable performance on shared storage
02:05 dcope :)
02:05 sonicrose 1gbe is just not enough
02:05 sonicrose when you have a bunch of VMs all sharing it
02:05 sonicrose now you're not running VMs
02:06 sonicrose you're just pulling files with HTTP so the demands are different
02:06 dcope nope, no VMs here
02:06 dcope yeah
02:06 sonicrose i'd say that running virtual machine images is multitudes more intense on the demands
02:07 sonicrose so i had to setup my gluster with striping to increase the performance
02:07 sonicrose now i can use up most of my 10gbe doing writes
02:07 sonicrose but i still max out the read speeds of my drives before i saturate 10gbe
02:12 dcope do you have sata drives or ssds?
02:12 dcope the bottleneck im hitting right now seems to be the read speeds of my sata drives are being just hammered
02:13 dcope which is why i've been investigating glusterfs :)
02:36 gildub joined #gluster
02:43 sonicrose im using a total of 18 sata 7200 2tb drives
02:45 sonicrose but, i built another pool where there were 9x 120GB SSDs and 27 SAS drives
02:45 kshlm joined #gluster
02:45 plarsen joined #gluster
02:52 ThatGraemeGuy joined #gluster
02:57 sonicrose @all... is it possible that i'm not going crazy when i see a speed increase mounting with IP address instead of DNS name?  like a significant speed increase... and i'm already using /etc/hosts for everything
02:58 sonicrose like mount 127.0.0.1:/myvol reads 294MB/sec but mount localhost:/myvol reads 68MB/sec
02:59 sonicrose that makes no sense unless the fuse client is doing a dns lookup on every block read
03:00 sonicrose could gluster run without any hostnames?  just IP only?
03:01 sonicrose sounds like my next experiment
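
A hedged way to run that comparison, assuming a volume named myvol reachable as both storage1 and 10.0.0.11 (hypothetical) and a large file already on it; drop caches or use different files between runs so the brick-side page cache doesn't skew the result:

    mount -t glusterfs 10.0.0.11:/myvol /mnt/byip
    mount -t glusterfs storage1:/myvol  /mnt/byname
    dd if=/mnt/byip/bigfile   of=/dev/null bs=1M
    dd if=/mnt/byname/bigfile of=/dev/null bs=1M
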
03:32 rejy joined #gluster
03:35 dcope http://gluster.org/community/documentation/index.php/Gluster_3.1:_Understanding_Load_Balancing
03:35 glusterbot Title: Gluster 3.1: Understanding Load Balancing - GlusterDocumentation (at gluster.org)
03:36 dcope so the "Gluster Distribute capability" comes for free, enabled by default etc?
03:36 dcope (this just seems too good to be true lol)
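
Roughly speaking, yes: a volume created without a replica or stripe count is a plain distribute (DHT) volume, so files are spread across bricks by filename hash out of the box. A minimal sketch with hypothetical names:

    gluster volume create myvol server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol
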
03:37 bharata-rao joined #gluster
03:39 atinmu joined #gluster
03:44 nbalachandran joined #gluster
03:49 RameshN joined #gluster
03:50 shubhendu joined #gluster
03:57 itisravi joined #gluster
03:57 plarsen joined #gluster
04:02 kdhananjay joined #gluster
04:04 atalur joined #gluster
04:05 atalur joined #gluster
04:07 _Bryan_ joined #gluster
04:12 atalur joined #gluster
04:16 nishanth joined #gluster
04:21 dcope joined #gluster
04:27 kshlm joined #gluster
04:28 ndarshan joined #gluster
04:35 cjhanks joined #gluster
04:35 saurabh joined #gluster
04:38 Rafi_kc joined #gluster
04:38 anoopcs joined #gluster
04:40 ppai joined #gluster
04:43 [HACKING-TWITTER joined #gluster
04:49 anoopcs joined #gluster
04:53 dcope joined #gluster
04:55 aravindavk joined #gluster
05:01 deepakcs joined #gluster
05:03 rastar joined #gluster
05:11 saurabh joined #gluster
05:14 prasanth_ joined #gluster
05:15 ramteid joined #gluster
05:17 kanagaraj joined #gluster
05:18 [HACKING-TWITTER joined #gluster
05:23 kumar joined #gluster
05:30 spandit joined #gluster
05:36 [HACKING-TWITTER joined #gluster
05:38 [HACKING-TWITTER joined #gluster
05:41 psharma joined #gluster
05:41 raghu joined #gluster
05:42 [HACKING-TWITTER joined #gluster
05:43 ctria joined #gluster
05:45 [HACKING-TWITTER joined #gluster
05:47 [HACKING-TWITTER joined #gluster
05:52 jiffin joined #gluster
05:54 rastar joined #gluster
05:56 XpineX joined #gluster
06:00 LebedevRI joined #gluster
06:07 Philambdo joined #gluster
06:08 itisravi_ joined #gluster
06:10 lalatenduM joined #gluster
06:11 sputnik13 joined #gluster
06:12 cjanbanan joined #gluster
06:14 vpshastry joined #gluster
06:14 XpineX joined #gluster
06:15 rastar joined #gluster
06:16 kanagaraj joined #gluster
06:27 ricky-ti1 joined #gluster
06:28 purpleidea joined #gluster
06:47 meghanam joined #gluster
06:47 meghanam_ joined #gluster
06:50 glusterbot New news from resolvedglusterbugs: [Bug 1121517] Effort to track the XML output for various geo-rep commands. <https://bugzilla.redhat.com/show_bug.cgi?id=1121517>
06:55 dcope joined #gluster
06:57 dusmant joined #gluster
07:00 vu joined #gluster
07:03 sputnik13 joined #gluster
07:03 Nightshader joined #gluster
07:04 rtalur_ joined #gluster
07:07 rtalur__ joined #gluster
07:10 Nightshader joined #gluster
07:12 keytab joined #gluster
07:16 deepakcs joined #gluster
07:17 itisravi joined #gluster
07:19 ctria joined #gluster
07:22 nbalachandran joined #gluster
07:22 kumar joined #gluster
07:23 dusmant joined #gluster
07:24 ppai joined #gluster
07:24 atalur joined #gluster
07:26 harish__ joined #gluster
07:29 LebedevRI joined #gluster
07:32 prasanth_ joined #gluster
07:39 mbukatov joined #gluster
07:39 dusmant joined #gluster
07:42 nbalachandran joined #gluster
07:42 kumar joined #gluster
07:45 ppai joined #gluster
07:46 rtalur__ joined #gluster
07:47 ekuric joined #gluster
07:52 atalur joined #gluster
07:53 liquidat joined #gluster
07:56 dcope joined #gluster
07:56 liquidat joined #gluster
08:09 xavih joined #gluster
08:12 SpComb joined #gluster
08:19 cjanbanan joined #gluster
08:19 rtalur_ joined #gluster
08:21 itisravi_ joined #gluster
08:37 kumar joined #gluster
08:39 vimal joined #gluster
08:42 itisravi joined #gluster
08:45 cjanbanan joined #gluster
08:46 kanagaraj joined #gluster
08:48 spandit joined #gluster
08:52 dusmant joined #gluster
09:16 lalatenduM joined #gluster
09:32 qdk joined #gluster
09:43 RameshN joined #gluster
09:44 harish__ joined #gluster
09:46 shubhendu joined #gluster
09:47 nishanth joined #gluster
09:53 rtalur_ joined #gluster
09:54 _nixpanic joined #gluster
09:54 _nixpanic joined #gluster
09:55 dusmant joined #gluster
09:56 _nixpani1 joined #gluster
09:56 _nixpani1 joined #gluster
09:58 dcope joined #gluster
09:58 glusterbot New news from newglusterbugs: [Bug 1121584] remove-brick stop & status not validating the bricks to check whether the rebalance is actually started on them <https://bugzilla.redhat.com/show_bug.cgi?id=1121584>
10:01 Norky joined #gluster
10:01 spandit joined #gluster
10:02 sputnik13 joined #gluster
10:05 bala joined #gluster
10:09 rjoseph joined #gluster
10:19 vu joined #gluster
10:28 glusterbot New news from newglusterbugs: [Bug 1121593] [gluster-cli] Better key matching logic for same keys across different domains <https://bugzilla.redhat.com/show_bug.cgi?id=1121593>
10:32 Slashman joined #gluster
10:33 rtalur_ joined #gluster
10:40 nishanth joined #gluster
10:41 dusmant joined #gluster
10:51 vu joined #gluster
10:51 sputnik13 joined #gluster
10:57 Pupeno joined #gluster
11:08 shubhendu joined #gluster
11:10 rtalur_ joined #gluster
11:17 ppai joined #gluster
11:19 kshlm joined #gluster
11:29 tdasilva joined #gluster
11:31 Norky I have been asked for access stats on GlusterFS clients similar to nfsiostat. I was thinking that the FUSE plugin for dstat ( https://github.com/dagwieers/dstat/blob/master/plugins/dstat_fuse.py ) would do the job but it doesn't work, because apparently the GlusterFS FUSE mounts are not the same as 'normal' FUSE connections and there is nothing in /sys/fs/fuse/connections/
11:31 glusterbot Title: dstat/plugins/dstat_fuse.py at master · dagwieers/dstat · GitHub (at github.com)
11:32 Norky we've got tools for measuring the servers, however the end user wants stats from the client - what can I use?
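
The closest built-in counters are per-brick rather than per-client, but they may still be worth a look; a hedged sketch, assuming a volume named myvol (hypothetical):

    gluster volume profile myvol start
    gluster volume profile myvol info     # latency and call counts per FOP, per brick
    gluster volume top myvol read         # most-read files
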
11:39 diegows joined #gluster
11:43 prasanth_ joined #gluster
11:46 SpComb Norky: my glustefs mounts have /sys/fs/fuse/connections/ ?
11:47 Norky hmm, curious
11:49 Norky http://pastebin.com/WxmKfpLs
11:49 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:50 Norky http://paste.fedoraproject.org/119499/94342014
11:50 glusterbot Title: #119499 Fedora Project Pastebin (at paste.fedoraproject.org)
11:50 ppai joined #gluster
11:52 R0ok_ joined #gluster
11:52 Norky what version of gluster are you running?
11:54 dcope joined #gluster
11:56 Norky SpComb, what version of glusterfs are you running?
11:56 Norky I'm on 3.4.0.57 (official Red Hat package)
11:57 vpshastry joined #gluster
11:57 vu joined #gluster
11:58 kkeithley1 joined #gluster
12:00 dcope joined #gluster
12:06 ccha2 do commands "gluster volume ...." use lock ?
12:06 pdrakeweb joined #gluster
12:06 ccha2 because I got these messages some time
12:06 ccha2 [2014-07-21 11:58:27.443439] E [glusterd-utils.c:332:glusterd_lock] 0-management: Unable to get lock for uuid: a13c6dcb-b185-4cb6-9b18-4fb6ea976bad, lock held by: a056a5ae-b8b4-41bc-97ef-c5530c880732
12:07 ccha2 I have 3 replicated servers and there is a script which does "gluster volume status ..." often on all replicated servers
12:09 ccha2 script does volume info, volume status, volume heal info, etc to check if everything is ok
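
A minimal sketch of such a check script, with a hypothetical volume name myvol; note that each "gluster volume ..." call takes the cluster-wide management lock, so runs on different servers should be staggered to avoid the "Unable to get lock" noise:

    #!/bin/sh
    gluster volume info myvol > /dev/null || exit 1
    gluster volume status myvol
    gluster volume heal myvol info
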
12:19 rjoseph joined #gluster
12:19 andreask joined #gluster
12:22 rjoseph left #gluster
12:25 chirino joined #gluster
12:26 Norky you only need to run the script on one server
12:27 Norky because the volumes are on all servers, "gluster volume info foo" on any server will get the same information
12:29 dusmant joined #gluster
12:30 edward1 joined #gluster
12:30 vu joined #gluster
12:31 troj joined #gluster
12:36 Norky that's not an answer to the question about lock conflicts, simply an observation
12:37 kkeithley joined #gluster
12:37 ccha2 I want to run the script on all servers, because if 1 server got problem the check script still go on the others
12:39 B21956 joined #gluster
12:44 giannello joined #gluster
12:49 vu joined #gluster
12:49 cjanbanan joined #gluster
12:52 julim joined #gluster
12:57 ndk joined #gluster
12:58 bala joined #gluster
12:59 ndevos ccha2: so, yes, 'gluster volume ...' uses locks, it normally only allows one running command at the same time
13:01 nbalachandran joined #gluster
13:01 ctria joined #gluster
13:01 fsimonce joined #gluster
13:01 hagarth joined #gluster
13:02 dcope joined #gluster
13:03 dusmant joined #gluster
13:05 plarsen joined #gluster
13:08 Philambdo joined #gluster
13:19 cjanbanan joined #gluster
13:19 japuzzo joined #gluster
13:25 dcope joined #gluster
13:27 bene2 joined #gluster
13:30 R0ok_ joined #gluster
13:40 diegows joined #gluster
13:42 tony_g joined #gluster
13:44 dberry joined #gluster
13:44 dberry joined #gluster
13:45 tony_g i'm planning to upgrade my servers from ubuntu 13.04 to 14.04 LTS; before i do this i wanted to check if anyone has experienced any issues doing this? is there anything i should look out for?
14:00 georgeh|workstat joined #gluster
14:00 Norky well all my gluster clients have no entries in /sys/fs/fuse/connections/
14:00 Norky SpComb, are you sure you have entries in /sys/fs/fuse/connections/ ?
14:01 Philambdo joined #gluster
14:01 mortuar joined #gluster
14:02 SpComb Norky: yes, same amount of dirs as I have glusterfs'en mounted, and they have .../waiting
14:02 ccha2 does locks block replication until locks are released ?
14:02 ccha2 or block healing ?
14:03 wushudoin joined #gluster
14:03 Norky SpComb, what version of glusterfs on what distro?
14:03 ccha2 or it's just block others "gluster volume ..."
14:03 SpComb Norky: whatever Ubuntu trusty has
14:04 theron joined #gluster
14:06 chirino joined #gluster
14:08 jobewan joined #gluster
14:08 dcope joined #gluster
14:09 Norky anyone else? are there entries in /sys/fs/fuse/connections/ for your glusterfs mounts?
14:09 Norky might be the version of fuse that's the difference
14:09 chirino_m joined #gluster
14:10 tom[] i'm testing a 3-node replication system using gluster with virtual machines. after everything was operating, i deleted one of the vms and rebuilt it. i introduced it to the gluster by copying the uuid of the deleted node from one of the other nodes, probing and restarting glusterd. process adapted from http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
14:10 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
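
Roughly, the procedure from that page, with hypothetical names (gv0, node2) and the dead node's UUID taken from "gluster peer status" on a surviving node:

    # on the rebuilt server
    echo "UUID=<uuid-of-the-server-being-replaced>" > /var/lib/glusterd/glusterd.info
    service glusterd restart
    gluster peer probe node2            # any surviving peer
    gluster volume heal gv0 full        # once the volume is visible again
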
14:10 Norky though fuse is part of the kernel...
14:10 tom[] it works to get the data back online and replication seems to work but i don't understand the output of volume status: https://gist.github.com/tom--/1a53bbc001bfdbe74099
14:10 glusterbot tom[]: https://gist.github.com/tom's karma is now -1
14:10 glusterbot Title: gist:1a53bbc001bfdbe74099 (at gist.github.com)
14:12 tom[] 10.1.1.5 is the restored node. why do the old nodes not seem to see the restored one, and vice versa?
14:13 tom[] and why does it list everything offline?
14:13 ccha2 what about gluster peer status ?
14:13 gmcwhistler joined #gluster
14:14 tom[] ccha2: i added that to the gist
14:15 tom[] for all 3 nodes
14:15 ccha2 I don't see the gluster peer status
14:15 ccha2 there is just gluster volume status
14:16 tom[] i updated the gist
14:16 tom[] https://gist.github.com/tom--/1a53bbc001bfdbe74099 lines 9 31 and 54
14:16 glusterbot tom[]: https://gist.github.com/tom's karma is now -2
14:16 glusterbot Title: gist:1a53bbc001bfdbe74099 (at gist.github.com)
14:18 ccha2 I don't know, I've never seen the state "State: Sent and Received peer request (Connected)"
14:19 ccha2 what about logs? error messages?
14:19 * tom[] goes look
14:28 tom[] there are errors after the last restart of glusterd
14:28 tom[] i added the log to the gist as another file
14:28 hagarth1 joined #gluster
14:29 tom[] and in the log for that volume, a steady repetition of:
14:29 tom[] [2014-07-21 14:27:43.103865] I [client.c:2097:client_rpc_notify] 0-gv0-client-0: disconnected
14:29 tom[] [2014-07-21 14:27:46.105645] W [socket.c:514:__socket_rwv] 0-gv0-client-0: readv failed (No data available)
14:30 Norky ahh, my problem is that the fusectl pseudo filesystem is not mounted
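
For anyone else hitting that, the control filesystem can be mounted by hand (a hedged one-liner):

    mount -t fusectl fusectl /sys/fs/fuse/connections
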
14:36 troj joined #gluster
14:38 doo joined #gluster
14:49 XpineX joined #gluster
15:04 jbrooks joined #gluster
15:04 rwheeler joined #gluster
15:07 mortuar joined #gluster
15:12 MacWinner joined #gluster
15:12 chirino joined #gluster
15:18 mortuar joined #gluster
15:19 cjanbanan joined #gluster
15:20 XpineX joined #gluster
15:20 theron joined #gluster
15:21 theron joined #gluster
15:24 troj joined #gluster
15:27 daMaestro joined #gluster
15:30 lmickh joined #gluster
15:30 chirino joined #gluster
15:33 troj joined #gluster
15:36 theron joined #gluster
15:37 theron joined #gluster
15:39 theron joined #gluster
15:40 nage joined #gluster
15:40 tdasilva joined #gluster
15:40 cjanbanan joined #gluster
15:43 chirino joined #gluster
15:43 troj joined #gluster
15:43 lalatenduM hchiramm_, ping
15:43 glusterbot lalatenduM: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:44 lalatenduM right! I forgot
15:46 glusterbot_ make sure you remember the next time
15:47 lalatenduM kkeithley_, yup :)
15:47 kkeithley_ ;-)
15:50 dcope joined #gluster
15:54 theron joined #gluster
16:04 Peter3 joined #gluster
16:22 cjanbanan joined #gluster
16:24 vu joined #gluster
16:33 caiozanolla joined #gluster
16:34 chirino_m joined #gluster
16:37 caiozanolla hello all, I'm setting up a replicated fs across amazon zones. since it's replicated, when copying files to the fs there is a penalty on speed for the replication being across networks. question, can I copy files 1 2 3 to the first host and the same files 1 2 3 to the other and avoid replication, thus decreasing the time it will take to have them replicated? (i have the same dataset on both zones.)
16:37 anoopcs joined #gluster
16:40 theron joined #gluster
16:40 troj joined #gluster
16:48 troj joined #gluster
16:49 cjanbanan joined #gluster
16:55 bene2 joined #gluster
16:56 JoeJulian So you are asking if you can manually replicate your data to avoid the overhead of replicating across high latency connections... As long as you're using a replicated volume across that high latency connection, that idea will do nothing for you. Typically you would use georeplication to duplicate your dataset from one zone to another.
17:01 anoopcs1 joined #gluster
17:04 nbalachandran joined #gluster
17:07 vu joined #gluster
17:09 anoopcs joined #gluster
17:10 recidive joined #gluster
17:11 chirino joined #gluster
17:12 dcope joined #gluster
17:13 zerick joined #gluster
17:17 dtrainor joined #gluster
17:18 _zerick_ joined #gluster
17:19 dtrainor joined #gluster
17:20 jobewan joined #gluster
17:21 dtrainor joined #gluster
17:26 nishanth joined #gluster
17:29 dtrainor joined #gluster
17:30 troj joined #gluster
17:33 dcope linux has some built in file caching stuff right? so if i get a gluster server with a lot of ram read files can be cached in memory to cut down on disk usage?
17:34 lalatenduM joined #gluster
17:46 tziOm joined #gluster
17:46 sputnik13 joined #gluster
17:46 sputnik13 joined #gluster
17:48 anotheral that's right
17:48 anotheral since bricks are just normal filesystems, all that should happen pretty transparently
17:50 anotheral http://www.tldp.org/LDP/sag/html/buffer-cache.html
17:50 glusterbot Title: The buffer cache (at www.tldp.org)
17:53 chirino joined #gluster
17:54 chirino joined #gluster
17:55 troj joined #gluster
17:55 _zerick_ joined #gluster
17:56 zerick joined #gluster
17:59 l0uis I've got a volume that is currently set to replica 2. If I want to go to replica 1, do I have to rebuild the volume or can the transition be done online by just changing the replica count?
17:59 dtrainor joined #gluster
18:00 dtrainor joined #gluster
18:02 dtrainor joined #gluster
18:03 dtrainor joined #gluster
18:05 _dist joined #gluster
18:06 dcope anotheral: awesome, thank you
18:06 dcope ordering 2 machines today to act as bricks with 128gb each
18:08 elico joined #gluster
18:12 tdasilva joined #gluster
18:22 caiozanolla JoeJulian Georeplication is not suitable since we need locking on files being created on both storages.
18:25 caiozanolla JoeJulian: actually, traffic among aws zones is pretty low latency, it's just that the initial load of files is taking too long because both sides have to be written simultaneously, thus sharing the machine's available bandwidth
18:31 cfeller joined #gluster
18:32 troj joined #gluster
18:40 Pupeno semiosis: ping?
18:40 cjanbanan joined #gluster
18:44 chirino joined #gluster
18:48 caiozanolla ok, let me rephrase the question. suppose i have 2 gluster fs with identical files, can I make them a single replicated volume?
18:50 recidive joined #gluster
18:51 diegows joined #gluster
18:51 JoeJulian caiozanolla: In theory, it should work. You'd have to remove the trusted.gluster* extended attributes and the .glusterfs directory and create a volume with both the pre-populated bricks. Get beyond 2 bricks (one on each end) and there would be some other severe complications.
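
A hedged sketch of that cleanup on each pre-populated brick (hypothetical path /data/brick1), before creating the new volume over it:

    getfattr -d -m . -e hex /data/brick1          # see which trusted.* attributes are set
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
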
18:52 bene2 joined #gluster
18:52 caiozanolla JoeJulian: i have 7 bricks on each side
18:53 caiozanolla JoeJulian: let me make the whole thing clearer so maybe you guys can come up w some insight...
18:54 JoeJulian Can you use multiple clients to each write a portion of the total file set?
18:55 cjanbanan joined #gluster
18:56 rotbeard joined #gluster
18:56 caiozanolla We have an app, distributed between 2 availability zones on amazon. Each app instance may create or reuse content of the storage. The content may be created simultaneously by more than one app (just the first time). then, upon creation, all apps will reuse the data. We have 2 servers, each has the same files on both sides. I have created the gluster volume using both zones and im copying the files from one machine.
18:58 siel joined #gluster
18:58 caiozanolla JoeJulian: I surelly could, but that would mean double the traffic between zones and since this connection is bandwidth limited per machine, it would do no good on speeding things up. (have not tested, but theoretically it would do no better than already is)
19:01 JoeJulian Wouldn't the bandwidth limit also limit any *other* way of copying the files, making them all about equal?
19:04 ndk joined #gluster
19:10 caiozanolla JoeJulian: There are 4 machines, 1a 1s 2a 2s being 1a(appserver zoneA) 1s(storage zoneA). Since 1s and 2s are replicated, copying from 1a to 1s uses 100% bandwidth from 1a to 1s and 100% from 1s to 2s, meaning 1s is using 2x the bandwidth, thus limiting 1a-1s to 50% and 1s-2s to 50%. Now, If I did copy from 1a to 1s and from 2a to 2s, then could somehow made 1s and 2s replicated, I would use 100% from 1a-1s and 100% from 2a-2s. cutting total time by
19:10 caiozanolla half.
19:10 doo joined #gluster
19:11 caiozanolla JoeJulian: also, inter zone bandwidth is more restricted than amongst the same zone.
19:12 caiozanolla oops, bandwidth inside the same zone is less restricted than between zones.
19:12 JoeJulian Right, I expected as much.
19:15 JoeJulian Ah, here's the miscomprehension: Since 1s and 2s are replicated, copying from 1a to 1s uses 50% bandwidth from 1a to 1s and 50% from 1a to 2s.
19:15 JoeJulian replication happens at the client.
19:15 dtrainor joined #gluster
19:16 B21956 joined #gluster
19:17 JoeJulian So 2 clients should be able to saturate the servers. (assuming the same outgoing bandwidth restrictions at the client as incoming bandwidth restrictions at the server)
19:18 * dcope can't wait to get his gluster setup going...
19:19 JoeJulian "dcope> linux has some built in file caching stuff right? so if i get a gluster server with a lot of ram read files can be cached in memory to cut down on disk usage?" yes. You can experiment with read-ahead settings on the backing filesystem, too.
19:24 dcope sweet
19:27 MacWinner joined #gluster
19:32 caiozanolla JoeJulian: "replication happens at the client". well this is odd. I assumed repplication was being done at the server, not the client. Im not sure how this will affect the setup.
20:00 _dist JoeJulian: I have a theory that my self heal daemon is destroying the read iops of my local drives, can I safely kill the pid or is there a better way to do that?
20:01 andreask joined #gluster
20:02 jobewan joined #gluster
20:02 vu joined #gluster
20:08 Pork__ joined #gluster
20:08 Pork__ This is kind of cool
20:12 daMaestro joined #gluster
20:16 JoeJulian _dist: there is.
20:16 JoeJulian _dist: I mean you can
20:17 JoeJulian Interesting theory...
20:18 JoeJulian Self-heal traffic is supposed to happen at a lower priority within gluster, but I was wondering about that too.
20:19 _dist JoeJulian: I'm not 100% on it, would the issue (I've yet to test), which is supposed to be report-only, affect the heal behaviour of the shd?
20:20 JoeJulian No
20:20 _dist ah, then it would just be an shd throttle problem (if it were the case)
20:20 JoeJulian right
20:20 _dist it's not my shd pid that's eating up the iops, it's the brick pid
20:21 JoeJulian Right. shd is just a client.
20:22 _dist is there away I can track io or something related for the crawl itself?
20:23 Pork__ If you guys are the devs, got an easy one for you:
20:23 Pork__ If my glusterfs-server process isn't starting, which log file might tell me why?
20:24 Pork__ On Ubuntu 12.04
20:24 VerboEse joined #gluster
20:24 _dist etc-glusterfs-glusterd.vol.log probably
20:24 Pork__ Seems like some weird-ass race-condition causing my glusterfs-server process to not be started some of the time
20:24 Pork__ Thanks, bro. Will have a look
20:25 _dist you might want to look at the brick log as well in /var/log/glusterfs/bricks/
20:28 dtrainor joined #gluster
20:31 Pork__ Good look, dude
20:31 Pork__ I'm finding some interesting shit
20:31 JoeJulian Nah, we're not devs. We're ,,(volunteers).
20:31 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
20:32 JoeJulian Though semiosis is kind-of responsible for making sure glusterfs-server starts in ubuntu.
20:32 Pork__ I recognize his name from the PPA
20:32 Pork__ And yours from the blog
20:32 Pork__ Thanks for the blog, by the way, bro
20:32 JoeJulian You're welcome.
20:33 JoeJulian I'm working on some more content, but it'll take a while.
20:34 Pork__ What I have read has been incredibly helpful so far
20:34 drajen joined #gluster
20:36 Pork__ I am guessing that my issue is linked to the fact that my bricks live in a LVM, or the fact that I upgraded Gluster from 3.4 to 3.5
20:37 Pork__ Bricks could explain why it's only working some of the time
20:37 Pork__ Racing with LVM
20:39 drajen Hello Gluster community, I've got a tiny ask. Gluster to me has always been software that is hardware agnostic; still, it seems Red Hat is forcing end users to use DAS or SAS JBODs for storage if they want a "supported" solution. Can anyone put some colour on that for me? If my shop decides to go DIY, what OS platform is the path of least resistance?
20:42 sputnik1_ joined #gluster
20:42 cogsu_ joined #gluster
20:44 JoeJulian drajen: Red Hat wants to sell what they have tested and have trained people to support. Least resistance, imho, is CentOS.
20:45 JoeJulian Pork__: There's a lot of us that back our brick filesystems with LVM.
20:45 Pork__ Thanks, Joe
20:45 _dist drajen: I suspect redhat wants each brick to be a disk, I'm assuming that's how they tested it ^^. Personally I'm running debian, but I agree with JoeJulian CentOS basically is RHEL
20:45 _Bryan_ joined #gluster
20:46 Pork__ I'm looking into a way to make sure that the glusterfs-server proc is started after the LVM is ready
20:46 JoeJulian And CentOS does some things smarter. Like it doesn't start services on install, but rather lets you enable them intelligently.
20:46 drajen JoeJulian: Thanks, so it's more of a scoping exercise for them at this point. The customer/integrator doesn't need SAN expertise and such, a dumb disk anyone can manage, right? :-)
20:47 JoeJulian If a mdadm raid isn't part of the operating system, it doesn't cause a headless install to drop to a shell.
20:47 JoeJulian right
20:48 JoeJulian semiosis: is on call 24/7 for free volunteer support. I can give you his home number...
20:48 Pork__ Hahaha
20:48 Pork__ That's brutal, man
20:48 JoeJulian ... sucks when he's afk...
20:49 mAd-1 if he has your address, I foresee a bag of flaming poop on your door step
20:49 drajen _dist: If they want the brick to be a disk, should they care at all what the fabric between the disk and the host is?
20:51 JoeJulian If you use something else, that's just one more component that they would have to troubleshoot.
20:51 _dist drajen: I'm not certain, it was just a guess. In some ways glusterfs can be software raid local, and in some ways it can be software raid networked. We've chosen to use it for network only and use local raid for that part. But I honestly didn't do testing to compare efficiency or problems
20:52 drajen _dist: as long as Gluster servers don't share the same networked storage, fault isolation shouldn't be a problem, right?
20:52 JoeJulian And we see that here. We're going along trying to help someone with a certain set of assumptions, then suddenly they're like, "Oh, I think the server's having trouble with the lun from the san." and we're all throwing up our hands and starting to work on our alcoholism skill.
20:55 _dist drajen: for us (my company), the point of gluster is to distribute or replicate over a network connection. But we'd always design it so a single failed network connection won't kill a volume (or we like to think we do)
20:56 Pork__ I use it so I can rip out a physical machine without downtime. Changed my life. Was able to finally stop drinking.
20:56 _dist Pork__: That's essentially why we use it, but it was no reason to stop drinking.
20:57 Pork__ The drinking was getting bad. Needed to wait till people left the office to do work on sys
20:58 drajen _dist: I like the idea of keeping my compute out of my storage and vice versa. What backend are you using?
20:59 _dist We have mixed compute/storage nodes, something I've heard of few people doing honestly. The purpose of our gluster volumes is one for fileshare storage and one for VM storage.
20:59 _dist each machine is a gluster server and a VM host, running Debian + qemu-kvm + glusterfs
20:59 Pork__ Gotta mix them, bro. Don't waste anything.
21:01 JoeJulian I had mixed hosts at my former $dayjob. Since we have different teams working on compute and storage here, now I just work on storage.
21:01 Pork__ $dayjob
21:01 jobewan joined #gluster
21:02 purpleidea Pork__: JoeJulian's $dayjob is hosting the movie collections of the #gluster community.
21:02 purpleidea he's particularly efficient at it too
21:02 _dist I understand the arguments for and against mixed/split setups, it's a big decision to make.
21:02 Pork__ Seems like a legit usage of a Gluster deployment
21:03 purpleidea _dist: i'm a fan of mixed setups
21:03 Pork__ How do you guys label people in your message so it highlights. This is my second IRC adventure. Ever.
21:03 purpleidea Pork__:
21:04 Pork__ You just type it in?
21:04 purpleidea Pork__: what client are you using?
21:04 Pork__ prupleidea:
21:04 purpleidea nope
21:04 Pork__ Some web client
21:04 Pork__ I don't know
21:04 purpleidea try pressing tab after typing the first few letters of a nick
21:04 Pork__ webchat.freenode.net
21:04 Pork__ purpleidea: Nice
21:04 purpleidea pur<tab>
21:05 Pork__ The autocomplete worked
21:05 purpleidea Pork__: :) now when you have a chance, get a real client (#irssi)
21:05 Pork__ I suppose I will owe it to this community to contribute
21:05 Pork__ Because this is some valuable shit
21:06 purpleidea yup
21:06 JoeJulian It usually helps keep the conversations separated. Occasionally, though, I'll be helping one person, give them instructions preceded by their nick, then someone else will follow them.
21:07 purpleidea g2g
21:08 Pork__ Alright I am actually getting somewhere with this (which is surprising, because I am actually very stupid)
21:08 JoeJulian lol
21:09 _dist btw, it's not quite related by to get stats on my VM disk io (when using libgfapi) I used qm info blockstats
21:09 _dist definitely the easiest way to do it
21:10 Pork__ Fuck, I spoke too soon
21:10 Pork__ $ sudo service glusterfs-server status >>> glusterfs-server stop/waiting
21:10 _dist but thanks JoeJulian for your tap msg. I was going to write something myself until I found the qm info
21:10 _dist ttyl
21:10 Pork__ This thing hates me
21:11 JoeJulian qm info?
21:11 Pork__ JoeJulian: Me?
21:11 Pork__ JoeJulian: Oops, I get it
21:13 Pork__ Why would it not work during startup, but work when I call the script manually (the script that was supposed to be run at startup)?
21:14 JoeJulian /var/log/glusterfs/etc-glusterfs-glusterd.vol.log should tell you why it failed.
21:14 pasqd joined #gluster
21:16 Pork__ JoeJulian: It looked like the brick wasn't ready when the server tried to start. I am guessing it's a race-condition between LVM and glusterfs-server in Ubuntu. So I am saying "fuck you, Ubuntu", and I added a script to the if-up (network scripts) that starts the glusterfs-server and mounts my volume
21:17 JoeJulian Odd, glusterfs-server shouldn't even try to start until all the local disks are mounted.
21:18 Pork__ JoeJulian: I have been known to make mistakes... quite often
21:18 JoeJulian And let's keep the language professional please.
21:19 Pork__ JoeJulian: Hahah for the kids
21:19 JoeJulian Or for the adults that like to be adult.
21:20 JoeJulian I wouldn't want a coworker walking by and seeing my screen littered with profanities.
21:20 daMaestro joined #gluster
21:20 Pork__ JoeJulian: I suppose that's a good point. I used to work in a place like that
21:23 JoeJulian Also... because I have ban authority. ;)
21:23 Pork__ JoeJulian: Hahaha, there's also that
21:24 bene2 joined #gluster
21:24 Pork__ JoeJulian: Your chat, your rules. I gotcha, bro
21:25 Pork__ JoeJulian: Are the mount logs kept in the same place?
21:26 bala joined #gluster
21:26 Pork__ JoeJulian: I think I got the issue with the server worked out
21:26 JoeJulian Yes, /var/log/glusterfs/{mountpoint where / is replaced with -}.log
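
For example (hypothetical mount point): a client mount at /mnt/gv0 logs to /var/log/glusterfs/mnt-gv0.log.
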
21:27 Pork__ JoeJulian: Thanks, bro
21:27 Pork__ JoeJulian: And as soon as I say something, the server is back to `stop/waiting`
21:29 JoeJulian Truncate your glusterd.vol.log, cause the problem, then go to fpaste.org and paste the log then paste the link it generates here.
21:29 JoeJulian I'll take a look and see if I see anything.
21:30 hagarth joined #gluster
21:32 Pork__ JoeJulian: Thanks, dude. Trying to recreate ie
21:32 Pork__ it**
21:33 Pork__ It seems to happen pretty randomly
21:34 Pork__ JoeJulian: http://fpaste.org/119661/59784651/
21:34 glusterbot Title: #119661 Fedora Project Pastebin (at fpaste.org)
21:36 JoeJulian Ah, getaddrinfo failures. Something network related.
21:36 JoeJulian This is in AWS, right?
21:37 Pork__ This is not AWS
21:37 Pork__ JoeJulian: This is not AWS
21:38 JoeJulian Oh, interesting. Someone else was seeing this on AWS and we suspected something related to that.
21:38 Pork__ JoeJulian: It's possible that the network isn't ready by the time the server starts
21:39 JoeJulian It shouldn't be, based on the upstart requirements for glusterfs-server.
21:39 Pork__ JoeJulian: I will need to double check, but I think that the bricks are defined by hostname instead of IP
21:39 JoeJulian I should hope so
21:40 Pork__ JoeJulian: Trying to think of potential issues
21:40 JoeJulian For now, add the server's own hostname to 127.0.0.1 in /etc/hosts
21:40 Pork__ JoeJulian: I would be surprised if it wasn't. Hold on
21:40 Pork__ JoeJulian: Confirmed. It is in there
21:41 sputnik13 joined #gluster
21:41 JoeJulian I assume DNS is external and is not one of the glusterfs servers.
21:41 Pork__ JoeJulian: Correct. Router.
21:43 Pork__ I am wondering if I shouldn't just try reinstalling glusterfs-server on this machine
21:43 Pork__ Then heal from the other
21:44 JoeJulian I don't think that would change anything.
21:45 Pork__ JoeJulian: Well I did upgrade 3.4 to 3.5 this morning...
21:45 Pork__ JoeJulian: Hopefully I didn't destroy anything
21:46 Pork__ JoeJulian: Although I am pretty sure I remember this being an issue in 3.4
21:47 Pork__ JoeJulian: I tried adding a command to my if-up.d/ folder that was starting the glusterfs-server, but it didn't work 100% of the time
21:48 B21956 joined #gluster
21:49 chirino joined #gluster
21:49 nage joined #gluster
21:51 recidive joined #gluster
21:52 JoeJulian Pork__: Do you have ipv6 disabled?
21:52 Pork__ JoeJulian: In Ubuntu? No
21:53 tristanz joined #gluster
21:53 tristanz are there any options in gluster to fully avoid split-brain yet?
21:56 Pork__ JoeJulian: Ok, Bro Julian, I think that I have the glusterfs-server issue resolved
21:57 JoeJulian tristanz: sure, there's a couple. Two different methods of quorum.
21:58 Pork__ JoeJulian: Never mind
21:58 daMaestro joined #gluster
22:07 vu joined #gluster
22:10 sputnik13 joined #gluster
22:15 oxidane joined #gluster
22:19 mrEriksson joined #gluster
22:26 Pork__ JoeJulian: Hey, dude, thanks for your help today
22:26 Pork__ JoeJulian: I am outy
22:26 JoeJulian Later
22:27 JoeJulian hmm... where did the client dumps end up...
22:29 kkeithley1 joined #gluster
22:29 mrEriksson Hey folks, I'm trying to build rpm packages for 3.5, but I keep getting complaints about python-ctypes not being available, even though it is installed and found by configure. Anyone seen this before?
22:31 JoeJulian I haven't tried (re)building 3.5 packages. Wrong architecture package is installed maybe?
22:32 mrEriksson Python packages?
22:32 JoeJulian iirc, ctypes is arch dependent
22:32 mrEriksson Most probably, yes, but I know nothing about python :-)
22:33 mrEriksson But if I just launch python and "import ctypes", I get no errors
22:33 mrEriksson But I don't really know if that means that things are OK or if something in python still could be b0rken
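
A hedged way to see exactly which ctypes module and architecture that python picks up (python 2.6 print-statement syntax):

    python -c 'import ctypes, platform; print ctypes.__file__, platform.architecture()'
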
22:34 sspinner joined #gluster
22:34 JoeJulian which distro, which python version
22:34 mrEriksson SUSE Linux Enterprise, python 2.6
22:35 mrEriksson SLE 11SP3 to be exact
22:35 Alex joined #gluster
22:36 mrEriksson Hmm, this could be an suse issue, found some comments in the source
22:48 daMaestro joined #gluster
22:56 tom[] could someone kindly confirm the procedure for replacing a failed replication server: https://gist.github.com/tom--/50a3ba067d441e9ed24b ?
22:56 glusterbot tom[]: https://gist.github.com/tom's karma is now -3
22:56 glusterbot Title: Replication node replacement (at gist.github.com)
22:59 siel joined #gluster
23:25 tristanz joined #gluster
23:32 troj joined #gluster
23:42 recidive joined #gluster
23:55 JoeJulian tom[]: That's how I've done it.
