
IRC log for #gluster, 2015-02-11


All times shown according to UTC.

Time Nick Message
00:23 tom[] joined #gluster
00:38 jaank joined #gluster
00:52 mattrixha joined #gluster
01:12 RicardoSSP joined #gluster
01:22 gildub joined #gluster
01:33 calisto joined #gluster
01:39 Gill Hey guys I was wondering if anyone is around who could share some ways to optimize glusterfs for sound file recording?
01:44 glusterbot News from newglusterbugs: [Bug 1138992] gluster.org broken links <https://bugzilla.redhat.com/show_bug.cgi?id=1138992>
01:48 JoeJulian Gill, like for storing days upon days of recording, or recording multitrack performances?
01:49 Gill JoeJulian: storing voicemail files
01:49 Gill so 30 second - 1 minute recordings
01:49 Gill maybe more sometime and playing them back
01:50 Gill when i play back they keep getting cut off and are missing chunks
01:51 JoeJulian Those are usually under 150kbps so there shouldn't be any problems.
01:52 Gill i checked my gluster log and it seems like it keeps dropping the storage - it keeps going up and down, im thinking that may be my issue
01:52 Gill i changed it to use a port number because gluster is running on both nodes on port 49152
01:53 Gill but in the logs it keeps trying to connect to 24007
01:53 Gill so i changed it to this: gluster.az.internal:49152:/fs_storage
01:53 JoeJulian @ports
01:53 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
01:54 Gill oh
01:55 Gill but to connect to the volume it shouldnt be using the management port should it?
01:55 JoeJulian That's how it finds out how the volume is configured.
01:55 JoeJulian So yes.
01:55 Gill ah
01:55 Gill ok makes sense
01:55 JoeJulian It connects to the management daemon, retrieves the volume configuration, then connects to all the servers that are part of the volume.
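[Editor's note: a minimal sketch of the usual fuse mount for the setup above -- the client only names the volume, not a brick port, since it asks glusterd on 24007 for the configuration first; the mount point is hypothetical and the volume name is taken from the line quoted earlier:
    mount -t glusterfs gluster.az.internal:/fs_storage /mnt/voicemail
or as an fstab entry:
    gluster.az.internal:/fs_storage /mnt/voicemail glusterfs defaults,_netdev 0 0
]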
01:57 Gill just tested again i see nothing in the mnt logs
01:57 Gill but it skipped around in the recording then cut the last 5 seconds off
01:58 Gill maybe i need to cache some of the data locally?
01:59 JoeJulian I've used gluster as VM storage for freeswitch. Never had any problems.
01:59 JoeJulian VM being Voicemail...
01:59 Gill thats exactly what im trying to do
01:59 Gill with freeswitch
02:00 JoeJulian I was only peak recording about 100 messages simultaneously with no quality loss at all.
02:01 Gill i havent changed any settings though its vanilla glusterfs 3.5.3
02:01 JoeJulian Yeah, mine was all defaults.
02:02 Gill so i have no idea why its cutting off my audio
02:03 JoeJulian You've been doing FS for a long time, so I doubt it's anything network related or you would have found it by now...
02:04 Gill my glusterfs nodes are in different data centers
02:04 JoeJulian Oh, that'll do it. High latency's a bitch.
02:04 Gill maybe its the latency between the 2 glusters?
02:04 Gill crap
02:05 JoeJulian If it were me, I'd record to local disk and use a script to move it to the shared storage for that use case.
02:05 Gill is there a way to make it so each side connects only to its local gluster server and then the gluster servers share the files?
02:05 nangthang joined #gluster
02:05 Gill ok makes sense
02:05 JoeJulian Not really, writes are synchronous.
02:05 JoeJulian reads will come from the local server though.
02:06 Gill so i’ll have to rsync to gluster and rsync back down
02:06 Gill oh so i can have it write to the local filesystem and have it read from gluster?
02:06 JoeJulian yeah
02:06 gem joined #gluster
02:07 JoeJulian I mean, not write to the brick, but just write to a temp directory.
02:07 Gill yea write to temp dir then rsync every minute to gluster
02:07 T3 joined #gluster
02:07 JoeJulian That would be pretty easy to do as part of the dialplan.
02:07 Gill and have the VM read from the gluster
02:07 JoeJulian Wouldn't even need a scheduled job, just put it in the dialplan.
02:08 Gill oh have the dialplan start the copy?
02:08 JoeJulian record to the tempdir, and after the connection ends, move it to the voicemail store.
02:08 JoeJulian should be pretty easy
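[Editor's note: a hedged sketch of the record-locally-then-move idea above, e.g. called from the dialplan or a cron job once a recording is finished; all paths and the one-minute threshold are illustrative:
    #!/bin/sh
    # move finished voicemail recordings from fast local disk to the gluster mount
    SRC=/var/spool/freeswitch/tmp-recordings
    DST=/mnt/voicemail
    # only touch files that have not been written to for at least a minute
    find "$SRC" -type f -name '*.wav' -mmin +1 -exec mv {} "$DST"/ \;
]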
02:08 Gill cool ill give that a try
02:08 Gill thanks man!
02:09 Gill i was getting really nervous that gluster wouldnt work
02:09 JoeJulian I wish I still had access to the configs I did at Ed Wyse. I think I've even got the framework there to demonstrate a little mock up.
02:10 Gill that would have been awesome!
02:11 Gill cool doing the recording locally worked great
02:14 harish joined #gluster
02:14 Gill thanks so much JoeJulian! it works!
02:15 Gill JoeJulian ++
02:24 prasanth_ joined #gluster
02:25 JoeJulian Awesome!
02:27 Gill JoeJulian: I was so nervous about it killing my whole deployment
02:28 Gill gnight JoeJulian thanks again!
02:29 Folken__ joined #gluster
02:50 meghanam joined #gluster
02:59 marcoceppi joined #gluster
02:59 marcoceppi joined #gluster
03:05 marcoceppi joined #gluster
03:05 bharata-rao joined #gluster
03:22 RameshN joined #gluster
03:28 _ndevos joined #gluster
03:28 _ndevos joined #gluster
03:53 itisravi joined #gluster
03:56 T3 joined #gluster
03:57 bala joined #gluster
04:02 redbeard joined #gluster
04:08 meghanam joined #gluster
04:11 gem joined #gluster
04:19 atinmu joined #gluster
04:36 jiffin joined #gluster
04:39 kshlm joined #gluster
04:40 harish joined #gluster
04:43 mkzero joined #gluster
04:45 Manikandan joined #gluster
04:45 Manikandan_ joined #gluster
04:48 anoopcs joined #gluster
04:49 rafi joined #gluster
04:51 spandit joined #gluster
04:52 ndarshan joined #gluster
04:57 T3 joined #gluster
04:59 rwheeler joined #gluster
05:02 ppai joined #gluster
05:02 anigeo joined #gluster
05:04 atalur joined #gluster
05:21 anil joined #gluster
05:24 itpings hi guys
05:24 itpings sup
05:24 soumya_ joined #gluster
05:26 aravindavk joined #gluster
05:26 JoeJulian cost of livin'
05:28 raghu` joined #gluster
05:31 nbalacha joined #gluster
05:33 atalur joined #gluster
05:34 schandra joined #gluster
05:42 shubhendu joined #gluster
05:42 prasanth_ joined #gluster
05:43 suman_d_ joined #gluster
05:47 ramteid joined #gluster
05:53 schandra joined #gluster
05:53 sakshi joined #gluster
05:56 ppai joined #gluster
05:57 kdhananjay joined #gluster
05:57 aravindavk joined #gluster
05:58 aravindavk joined #gluster
05:58 overclk joined #gluster
05:58 mattrixha joined #gluster
06:03 karnan joined #gluster
06:03 anil_ joined #gluster
06:09 bene_in_BLR joined #gluster
06:09 dusmant joined #gluster
06:17 maveric_amitc_ joined #gluster
06:18 hagarth joined #gluster
06:31 nshaikh joined #gluster
06:37 nbalacha joined #gluster
06:45 T3 joined #gluster
06:45 harish joined #gluster
06:45 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
06:48 prasanth_ joined #gluster
06:57 atinmu joined #gluster
07:03 mbukatov joined #gluster
07:05 nshaikh joined #gluster
07:07 tessier I'm playing with glusterfs with an eye on using it as a highly available backend for hosting Xen/KVM virtual machines. I think I set up this volume with two bricks but only 1 replica. How can I check how many replicas I have and, if only 1, how do I increase it to 2?
07:13 tessier hmm...actually I think maybe I do have replication set to 2 already but the brick it should be replicating to is offline
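[Editor's note: a hedged aside on checking this; the volume name is the one used later in the conversation:
    gluster volume info glustertest      # "Type: Replicate" / "Number of Bricks: 1 x 2 = 2" for a 2-way replica
    gluster volume status glustertest    # shows whether each brick process is online
]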
07:13 jtux joined #gluster
07:17 tessier Killing the gluster processes and restarting got the brick back online. But one brick shows a...actually I can see the brick is now synching. Awesome.
07:17 tessier Another minute and it will be all caught up.
07:20 anrao joined #gluster
07:22 dusmant joined #gluster
07:24 tessier hmm....so I filled up the filesystem completely, deleted the file, and now it still shows 100% full.
07:30 Pupeno joined #gluster
07:31 Folken_ joined #gluster
07:31 Folken_ sync
07:35 atinmu joined #gluster
07:37 tessier Folken_: Are you suggesting that I need to sync?
07:37 Folken_ give it a try
07:38 tessier Just did. No effect.
07:40 anrao one min come here :)
07:40 JoeJulian tessier: that would suggest that the file is still open.
07:41 JoeJulian Not sure what would happen if the file was in the process of being healed when it's deleted. Perhaps that?
07:43 tessier The only thing which could be holding it open at this point is gluster itself.
07:43 Folken_ you can check
07:43 Folken_ lsof | grep filename
07:44 tessier I notice that volume heal glustertest info shows Number of entries: 2
07:45 tessier Folken_: That doesn't turn up the deleted files.
07:46 jvandewege joined #gluster
07:46 glusterbot News from resolvedglusterbugs: [Bug 1190058] folder "trusted.ec.version" can't be healed after lookup <https://bugzilla.redhat.com/show_bug.cgi?id=1190058>
07:52 tessier /export/glustertest/brick/.glusterfs/40/15/4015c0d5-3dab-4049-baac-5026c09d482b is still there on the brick servers using up all of the space
07:52 tessier What is the .glusterfs dir used for and is it safe to rm that or is there a better way to clean that up?
07:53 LebedevRI joined #gluster
07:56 * tessier rm's it just to see what happens
07:58 tessier Well, that seems to have cleared it up with no ill effects.
08:00 JoeJulian @lucky what is this .glusterfs directory
08:00 glusterbot JoeJulian: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
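[Editor's note: as the post above explains, entries under .glusterfs/ are gfid-named hard links to the real files on the brick, so disk space is only released once every link and any open fd is gone. A hedged way to list candidate orphaned gfid files, i.e. links whose named file is already gone:
    find /export/glustertest/brick/.glusterfs -type f -links 1    # candidates only; housekeeping files under indices/ also match
Deleting them by hand, as above, works but bypasses gluster's own bookkeeping, so it is not the recommended first resort.]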
08:07 nangthang joined #gluster
08:09 atalur joined #gluster
08:12 kanagaraj joined #gluster
08:12 jvandewege joined #gluster
08:26 kanagaraj joined #gluster
08:28 itpings hi
08:28 glusterbot itpings: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:29 itpings is it a good idea to create LVM first and then deploy glusterfs ?
08:29 itpings of course if i want a bigger drive
08:29 itpings or to add drive later ?
08:29 itpings when storage is required
08:30 itpings and add it as another brick and mount on same vol ?
08:30 DV joined #gluster
08:31 kovshenin joined #gluster
08:34 T3 joined #gluster
08:38 ndevos itpings: a hardware RAID is recommended, and LVM+thinp on top of that makes it possible to create volume snapshots
08:38 nishanth joined #gluster
08:39 ndevos one of the best practices is to have bricks with ~12 disks in a RAID6/10
08:40 ndevos for small deployments, you may not need snapshots
08:41 ndevos adding bricks to a volume is relatively simple, but you may want to rebalance the data afterwards (which consumes quite some bandwidth)
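[Editor's note: a hedged sketch of the RAID + LVM thin-provisioning layout ndevos describes (thin LVs are what gluster volume snapshots expect); device names, sizes and paths are illustrative:
    pvcreate /dev/md0                                      # RAID6/10 device
    vgcreate vg_bricks /dev/md0
    lvcreate -L 5T --thinpool brickpool vg_bricks          # thin pool
    lvcreate -V 5T --thin -n brick1 vg_bricks/brickpool    # thin LV for the brick
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
]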
08:42 Philambdo joined #gluster
08:45 rwheeler joined #gluster
08:47 deniszh joined #gluster
08:48 nbalacha joined #gluster
08:50 liquidat joined #gluster
08:50 dusmant joined #gluster
08:51 ndarshan joined #gluster
08:56 ricky-ti1 joined #gluster
08:58 fsimonce joined #gluster
09:01 shylesh__ joined #gluster
09:02 jaank joined #gluster
09:02 marbu joined #gluster
09:04 hagarth joined #gluster
09:07 deniszh joined #gluster
09:10 deniszh joined #gluster
09:13 deniszh joined #gluster
09:13 iPancreas joined #gluster
09:13 iPancreas trying to setup gluster
09:15 iPancreas bumping into trouble while probing peers: "peer probe: failed: Probe returned with unknown errno 107"
09:15 iPancreas however, i can ssh into each peer among them
09:15 iPancreas and ideas?
09:15 nangthang joined #gluster
09:15 soumya_ joined #gluster
09:15 deniszh joined #gluster
09:16 ndarshan joined #gluster
09:16 glusterbot News from resolvedglusterbugs: [Bug 1173528] Change in volume heal info command output <https://bugzilla.redhat.com/show_bug.cgi?id=1173528>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1146524] glusterfs.spec.in - synch minor diffs with fedora dist-git glusterfs.spec <https://bugzilla.redhat.com/show_bug.cgi?id=1146524>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175645] [USS]: Typo error in the description for USS under "gluster volume set help" <https://bugzilla.redhat.com/show_bug.cgi?id=1175645>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175694] [SNAPSHOT]: snapshoted volume is read only but it shows rw attributes in mount <https://bugzilla.redhat.com/show_bug.cgi?id=1175694>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175728] [USS]: All uss related logs are reported under /var/log/glusterfs, it makes sense to move it into subfolder <https://bugzilla.redhat.com/show_bug.cgi?id=1175728>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175733] [USS]: If the snap name is same as snap-directory than cd to virtual snap directory fails <https://bugzilla.redhat.com/show_bug.cgi?id=1175733>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175735] [USS]: snapd process is not killed once the glusterd comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1175735>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175736] [USS]:After deactivating a snapshot trying to access the remaining activated snapshots from NFS mount gives 'Invalid argument' error <https://bugzilla.redhat.com/show_bug.cgi?id=1175736>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175739] [USS]: Non root user who has no access to a directory, from NFS mount, is able to access the files under .snaps under that directory <https://bugzilla.redhat.com/show_bug.cgi?id=1175739>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175744] [USS]: Unable to access .snaps after snapshot restore after directories were deleted and recreated <https://bugzilla.redhat.com/show_bug.cgi?id=1175744>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175752] [USS]: On a successful lookup, snapd logs are filled with Warnings "dict OR key (entry-point) is NULL" <https://bugzilla.redhat.com/show_bug.cgi?id=1175752>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175753] [readdir-ahead]: indicate EOF for readdirp <https://bugzilla.redhat.com/show_bug.cgi?id=1175753>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175754] [SNAPSHOT]: before the snap is marked to be deleted if the node goes down than the snaps are propagated on other nodes and glusterd hungs <https://bugzilla.redhat.com/show_bug.cgi?id=1175754>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175755] SNAPSHOT[USS]:gluster volume set for uss doesnot check any boundaries <https://bugzilla.redhat.com/show_bug.cgi?id=1175755>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175756] [USS] : Snapd crashed while trying to access the snapshots under .snaps directory <https://bugzilla.redhat.com/show_bug.cgi?id=1175756>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175758] [USS] : Rebalance process tries to connect to snapd and in case when snapd crashes it might affect rebalance process <https://bugzilla.redhat.com/show_bug.cgi?id=1175758>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175765] [USS]: When snapd is crashed gluster volume stop/delete operation fails making the cluster in inconsistent state <https://bugzilla.redhat.com/show_bug.cgi?id=1175765>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1177418] entry self-heal in 3.5 and 3.6 are not compatible <https://bugzilla.redhat.com/show_bug.cgi?id=1177418>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1180070] [AFR] getfattr on fuse mount gives error : Software caused connection abort <https://bugzilla.redhat.com/show_bug.cgi?id=1180070>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1180411] CIFS:[USS]: glusterfsd OOM killed when 255 snapshots were browsed at CIFS mount and Control+C is issued <https://bugzilla.redhat.com/show_bug.cgi?id=1180411>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175749] glusterfs client crashed while migrating the fds <https://bugzilla.redhat.com/show_bug.cgi?id=1175749>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1179658] Add brick fails if parent dir of new brick and existing brick is same and volume was accessed using libgfapi and smb. <https://bugzilla.redhat.com/show_bug.cgi?id=1179658>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175732] [SNAPSHOT]: nouuid is appended for every snapshoted brick which causes duplication if the original brick has already nouuid <https://bugzilla.redhat.com/show_bug.cgi?id=1175732>
09:16 glusterbot News from resolvedglusterbugs: [Bug 1175738] [USS]: data unavailability for a period of time when USS is enabled/disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1175738>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1138385] [DHT:REBALANCE]: Rebalance failures are seen with error message  " remote operation failed: File exists" <https://bugzilla.redhat.com/show_bug.cgi?id=1138385>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1175730] [USS]: creating file/directories under .snaps shows wrong error message <https://bugzilla.redhat.com/show_bug.cgi?id=1175730>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1159484] ls -alR can not heal the disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1159484>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1161885] Possible file corruption on dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1161885>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1170954] Fix mutex problems reported by coverity scan <https://bugzilla.redhat.com/show_bug.cgi?id=1170954>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1170959] EC_MAX_NODES is defined incorrectly <https://bugzilla.redhat.com/show_bug.cgi?id=1170959>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1175742] [USS]: browsing .snaps directory with CIFS fails with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1175742>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1166505] mount fails for nfs protocol in rdma volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1166505>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1171259] mount.glusterfs does not understand -n option <https://bugzilla.redhat.com/show_bug.cgi?id=1171259>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1166515] [Tracker] RDMA support in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1166515>
09:17 ndevos *cough* that was not me!
09:17 glusterbot News from resolvedglusterbugs: [Bug 1180404] nfs server restarts when a snapshot is deactivated <https://bugzilla.redhat.com/show_bug.cgi?id=1180404>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1170548] [USS] : don't display the snapshots which are not activated <https://bugzilla.redhat.com/show_bug.cgi?id=1170548>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1170921] [SNAPSHOT]: snapshot should be deactivated by default when created <https://bugzilla.redhat.com/show_bug.cgi?id=1170921>
09:17 glusterbot News from resolvedglusterbugs: [Bug 1177899] nfs: ls shows "Permission denied" with root-squash <https://bugzilla.redhat.com/show_bug.cgi?id=1177899>
09:22 partner iPancreas: sounds like networking issue, do you have firewall active on the servers? what about connectivity between them?
09:22 iPancreas connectivity among them is fine
09:23 iPancreas im struggling a bit with centos7,
09:23 iPancreas Im guessing firewall
09:23 partner if you have not allowed the necessary traffic then surely it will fail
09:24 iPancreas Im performing a port scan and it looks like some kind of firewall is on (not iptables)
09:24 partner firewalld ?
09:24 JoeJulian iPancreas: Not sure why it's unknown. 107 is Transport Endpoint Not Connected. Probably a firewall or iptables.
09:24 JoeJulian @ports
09:24 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
09:25 iPancreas just using native gluster client, no need for smb or nfs, so that means that port 111 can stay closed
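[Editor's note: a hedged example of opening the ports listed above with firewalld on CentOS 7; the brick-port range is illustrative and depends on how many bricks each node serves:
    firewall-cmd --permanent --add-port=24007-24008/tcp
    firewall-cmd --permanent --add-port=49152-49160/tcp
    firewall-cmd --reload
]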
09:25 mbukatov joined #gluster
09:26 JoeJulian ndevos: I'm not going to complain if they're closed because they're actually fixed. :D
09:27 iPancreas ;)
09:28 ndevos JoeJulian: I have no idea, maybe those are the 3.6.x ones that have been fixed in the latest releases?
09:28 ndevos raghu`: did you close those? ^^
09:31 ThatGraemeGuy joined #gluster
09:37 hagarth ndevos: would you be able to pick up the community meeting today?
09:38 ndevos hagarth: yeah, I think so
09:38 * ndevos checks agenda
09:38 kaushal_ joined #gluster
09:39 hagarth ndevos: agenda to be updated, I think you know it better from last week
09:39 hagarth ndevos++ :)
09:39 glusterbot hagarth: ndevos's karma is now 8
09:39 ndevos hagarth: well, I mean *my* agenda :)
09:39 hagarth ndevos: ok! :)
09:39 ndevos hagarth: works for me, I'll update the other agenda in a bit and will send a reminder about the meeting
09:41 hagarth ndevos: thanks!
09:55 atinmu joined #gluster
09:56 dusmant joined #gluster
10:07 itpings joined #gluster
10:14 anrao joined #gluster
10:18 kanagaraj joined #gluster
10:20 dusmant joined #gluster
10:20 ndarshan joined #gluster
10:23 T3 joined #gluster
10:25 bene_in_BLR joined #gluster
10:38 kaushal_ joined #gluster
10:39 nishanth joined #gluster
10:44 anoopcs joined #gluster
10:44 ndevos REMINDER: at 12:00 UTC the weekly Gluster Community meeting will take place in #gluster-meeting
10:46 glusterbot News from newglusterbugs: [Bug 1191423] upgrade to gluster 3.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1191423>
10:46 glusterbot News from newglusterbugs: [Bug 1191437] build: issue with update of upstream build from 3.7dev-0.529 to 3.7dev-0.577 <https://bugzilla.redhat.com/show_bug.cgi?id=1191437>
10:47 ndarshan joined #gluster
10:48 dusmant joined #gluster
10:51 atinmu joined #gluster
10:57 soumya_ joined #gluster
11:01 elico joined #gluster
11:03 nbalacha joined #gluster
11:08 T3 joined #gluster
11:13 DV joined #gluster
11:16 sac`away joined #gluster
11:20 T0aD joined #gluster
11:22 dusmant joined #gluster
11:29 ilbot3 joined #gluster
11:29 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
11:29 itpings i have 2 x 6TB drives
11:29 itpings one 6TB is in use as brick vol is gv0
11:29 itpings now i want to add 6TB more to the same volume gv0 without lvm
11:29 itpings i would only add one more brick
11:29 itpings right ?
11:30 itpings and add brick to that vol ?
11:30 ndevos yes, you can use 'gluster volume add-brick' for that
11:30 itpings so meaning i dont need to lvm 2 x 6TB at first
11:31 ndevos no, you do not need to use lvm for that
11:31 itpings great
11:31 sac`away joined #gluster
11:31 itpings i am finalizing my setup with gluster and urbackup
11:31 itpings and rsync
11:31 ndevos if you would use lvm, you could add a new PV to your VG, and resize the LV+filesystem
11:32 ndevos gluster would then just pick the new size up (maybe you need to stop/start the volume or glusterfsd process for the brick)
11:32 itpings yeah that can be done but i am afraid if lvm fails or one drive dies all could fail
11:32 itpings so i just want to start without lvm
11:32 ndevos indeed
11:33 ndevos yeah, that should work fine
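[Editor's note: a minimal sketch of the two options discussed above; host names and brick paths are illustrative. Adding the second disk as its own brick:
    gluster volume add-brick gv0 node1:/data/brick2/gv0
    gluster volume rebalance gv0 start
    gluster volume rebalance gv0 status
The LVM alternative ndevos mentions, growing an existing brick instead:
    pvcreate /dev/sdc
    vgextend vg_bricks /dev/sdc
    lvextend -r -L +6T /dev/vg_bricks/brick1    # -r grows the filesystem as well
]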
11:33 itpings ty much appreciated
11:34 itpings i wrote a howto also for the same
11:34 itpings i was wondering if its worth for gluster community
11:34 ndevos sure, I think we appreciate all howtos and documents :)
11:34 itpings who could proof read it and post for newbies ?
11:35 ndevos you can send an email to the gluster-users@gluster.org list with that, post it on the gluster.org wiki
11:35 itpings ok sure thanks
11:35 itpings will do that after finalizing the howto but i want you guys to proof read it
11:35 ndevos many people would be able to review, maybe gluster-devel@gluster.org would be a good place to ask for reviews first
11:35 itpings coz i just started with gluster a week ago
11:36 ndevos sounds cool :)
11:37 itpings yup its good
11:47 glusterbot News from newglusterbugs: [Bug 1191486] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1191486>
11:47 Slashman joined #gluster
11:53 RameshN joined #gluster
11:54 dusmant joined #gluster
11:58 partner i guess meeting in couple of minutes, need to run for some coffee ->
12:00 mbukatov joined #gluster
12:00 ndevos REMINDER: Gluster Community meeting starting *now* in #gluster-meeting
12:01 [Enrico] joined #gluster
12:01 nshaikh joined #gluster
12:08 Folken_ what is the gluster meeting about?
12:15 wkf joined #gluster
12:17 glusterbot News from newglusterbugs: [Bug 1191497] Stripe translator wrongly calculating vector size and count <https://bugzilla.redhat.com/show_bug.cgi?id=1191497>
12:17 glusterbot News from resolvedglusterbugs: [Bug 1184191] DHT: Rebalance- Rebalance process crash after remove-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1184191>
12:21 jdarcy joined #gluster
12:31 kkeithley joined #gluster
12:47 prasanth_ joined #gluster
12:51 dusmant joined #gluster
12:52 ndevos Folken_: see http://www.gluster.org/pipermail/gluster-users/2015-February/020648.html
12:53 kdhananjay joined #gluster
12:56 rafi joined #gluster
12:58 anoopcs joined #gluster
13:02 anoopcs_ joined #gluster
13:04 Slashman joined #gluster
13:13 bala1 joined #gluster
13:17 sac`away joined #gluster
13:27 bala1 joined #gluster
13:32 Gill joined #gluster
13:36 dusmant joined #gluster
13:37 telmich joined #gluster
13:37 anoopcs_ joined #gluster
13:41 wkf joined #gluster
13:43 drscream ndevos: i would like to thank you for your tip with an upgrade to the new version 3.4 - after the upgrade it was easy to add the additional bricks without rebalancing :-)
13:43 ndevos drscream: cool, thanks for reporting back!
13:47 glusterbot News from newglusterbugs: [Bug 1191537] With afrv2 + ext4, lookups on directories with large offsets could result in duplicate/missing entries <https://bugzilla.redhat.com/show_bug.cgi?id=1191537>
13:52 RameshN joined #gluster
13:53 ira joined #gluster
13:55 Fen1 joined #gluster
14:04 Pupeno joined #gluster
14:05 ildefonso joined #gluster
14:10 plarsen joined #gluster
14:12 mbukatov joined #gluster
14:16 dgandhi joined #gluster
14:16 plarsen joined #gluster
14:27 lalatenduM joined #gluster
14:27 Philambdo joined #gluster
14:28 georgeh-LT2 joined #gluster
14:40 drscream left #gluster
14:48 mattrixha joined #gluster
14:49 meghanam joined #gluster
15:01 liquidat joined #gluster
15:02 ricky-ti1 joined #gluster
15:02 jmarley joined #gluster
15:02 awerner joined #gluster
15:02 kdhananjay joined #gluster
15:03 bennyturns joined #gluster
15:03 T3 joined #gluster
15:04 theron joined #gluster
15:06 kshlm joined #gluster
15:07 gkleiman joined #gluster
15:07 ricky-ticky2 joined #gluster
15:11 monotek joined #gluster
15:11 monotek1 joined #gluster
15:12 monotek1 joined #gluster
15:14 ricky-ticky1 joined #gluster
15:19 neofob joined #gluster
15:25 scuttle|afk joined #gluster
15:26 kaushal_ joined #gluster
15:28 tberchenbriter_ joined #gluster
15:29 tberchenbriter_ running the docker image of gluster/gluster and getting this
15:29 tberchenbriter_ Failed to get D-Bus connection: No connection to service manager.
15:29 tberchenbriter_ Redirecting to /bin/systemctl start  glusterd.service
15:29 tberchenbriter_ Failed to get D-Bus connection: No connection to service manager.
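[Editor's note: "Failed to get D-Bus connection" means systemctl cannot reach systemd inside the container. A hedged workaround is to run the container so systemd can start (e.g. --privileged, depending on how the image is built), or to start the daemon in the foreground instead of via systemctl:
    docker run -d --privileged --net=host --name gluster gluster/gluster
    # or, from a shell inside the container:
    glusterd -N    # run glusterd in the foreground
]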
15:30 ndevos hchiramm: is that yours? ^
15:30 hchiramm I have seen that error when I started with docker :)
15:32 hchiramm tberchenbriter_, how did u built that image
15:32 hchiramm dockerfile
15:32 hchiramm ?
15:32 tberchenbriter_ docker run
15:32 tberchenbriter_ gluster/gluster
15:33 tberchenbriter_ not sure if its happy or not, it didnt shut itself down
15:33 tberchenbriter_ should 8080 give me a web int?
15:35 gluster01 joined #gluster
15:35 gluster01 Hi there, anyone has had success mounting GlusterFS in CoreOS?
15:36 tberchenbriter_ im attempting to do it in a docker image under coreos... Id be interested in your findings though
15:37 gluster01 I am just starting to look into this. I wanted to mount it in CoreOs and share the mount inside Docker container... but looks like its not supported?
15:38 soumya joined #gluster
15:39 tberchenbriter_ Im guessing, but I think this wont ever go onto coreos, cuz you cant install packages to coreos
15:39 tberchenbriter_ so it would have to be a docker image
15:39 tberchenbriter_ then you just expose a port
15:39 tberchenbriter_ or maybe the unix socket (havent tried this yet)
15:40 harish joined #gluster
15:42 gluster01 the problem is I don't know if its possible to bring a mount from a container out into the host
15:44 tberchenbriter_ yes
15:44 tberchenbriter_ you do a -t /host/partition:/container/partition
15:44 tberchenbriter_ I mean -v
15:44 tberchenbriter_ to your docker run
15:45 tberchenbriter_ if its mounted in the container
15:45 gluster01 -v is from Host to container right?
15:45 tberchenbriter_ but if its different than regular filesystems, then maybe its something different (im new to gluster)
15:45 gluster01 How about container to host?
15:46 tberchenbriter_ when you launch docker
15:46 tberchenbriter_ so your docker run will look like this
15:47 tberchenbriter_ docker run -v {/host/part}:{/container/part} gluster/gluster
15:47 dberry joined #gluster
15:47 tberchenbriter_ you will then have /host/part (whatever you named it) on coreos
15:48 dberry joined #gluster
15:49 gluster01 tberchenbriter_: oh wow, and the /host/part you mount again into OTHER containers right?
15:49 tberchenbriter_ that may not be applicable here.. but yes if it is mounted and avail to shell then you would do that
15:50 gluster01 yeah, but what in the case where i want to share that folder into a different coreos host
15:50 gluster01 what i am trying to do is.. create 2 servers and install glusterfs cluster there
15:51 tberchenbriter_ I would guess that gluster will be communicating via one of its ports and replicating
15:51 gluster01 and on every other coreos host in the network, i mount the folder
15:51 tberchenbriter_ so you'll have to expose some ports with -p too
15:51 tberchenbriter_ im working on the same thing
15:51 gluster01 yeah but since coreOs does not support mounting the glusterfs folder
15:52 gluster01 I don't know how it will mount as a directory
15:52 gluster01 it probably can mount inside a container.. not inside coreos i guess
15:53 tberchenbriter_ does gluster work the same way as hdfs? do you have to run gluster fs ls or gluster put to use the fs?
15:53 gluster01 i am new to gluster too.. but from what i have read, in ubuntu etc, you can actually mount a network's gluster share
15:53 jobewan joined #gluster
15:54 gluster01 so that.. you have a directory in your machine that is redundant (since gluster is running on 1+ servers)
15:54 tberchenbriter_ does it show up in df?
15:54 gluster01 my plan was to mount it in all my coreos host.. and pass that inside the container using -v
15:55 gluster01 it should, since it is fuse based i think.
15:55 tberchenbriter_ sounds promising
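[Editor's note: a hedged sketch of the pattern being discussed -- fuse-mount the volume once on the host, then bind-mount it into containers; server, volume and paths are illustrative:
    mount -t glusterfs server1:/test-volume /mnt/gluster
    df -hT /mnt/gluster                        # shows up with type fuse.glusterfs
    docker run -v /mnt/gluster:/data myimage
On CoreOS the host-side fuse mount still needs a glusterfs client, which is the sticking point raised above.]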
15:55 gluster01 i am trying to NOT use NFS because, it is not redundant.. if one nfs server goes down
15:55 fattaneh1 joined #gluster
15:55 gluster01 the disk is not accessible on 10 other servers
15:58 tberchenbriter_ hmm, but seems like we'll need this container on each host
15:59 tberchenbriter_ so they would all have to have alot of storage
15:59 gluster01 yeah but mounting should not be via container
15:59 gluster01 CoreOs should have supported it natively just like NFS
15:59 gluster01 but it does not it seems
16:00 tberchenbriter_ but gluster is a daemon that requires a package
16:00 gluster01 yes, gluster will run on 2 separate coreos containers in the network
16:01 gluster01 i am talking about 3rd and 4th server that would want to use the hdd mounted on the glusterfs cluster
16:01 tberchenbriter_ gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
16:01 tberchenbriter_ thats what a distributed vol looks like
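[Editor's note: that example is a plain distributed volume; for the redundancy goal mentioned earlier a replicated layout would be the hedged variant, e.g.:
    gluster volume create test-volume replica 2 server1:/exp1 server2:/exp2
    gluster volume start test-volume
]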
16:05 gluster01 which part are you stuck in at the moment?
16:09 lalatenduM joined #gluster
16:12 fattaneh1 left #gluster
16:13 tberchenbriter_ my docker daemon doesnt look happy
16:13 tberchenbriter_ Connection failed. Please check if gluster daemon is operational.
16:13 tberchenbriter_ one of the guys at gluster is gonna help me out in a few
16:15 tberchenbriter_ gluster daemon I mean
16:20 kanagaraj joined #gluster
16:20 jobewan joined #gluster
16:22 T3 joined #gluster
16:25 hchiramm tberchenbriter_, ping, pm
16:30 hchiramm gluster01, JFY reference https://registry.hub.docker.com/u/gluster/gluster/dockerfile/
16:30 hchiramm https://github.com/humblec/dockit
16:32 squizzi joined #gluster
16:32 lmickh joined #gluster
16:33 coredump joined #gluster
16:38 shubhendu joined #gluster
16:40 gem joined #gluster
16:41 T3 joined #gluster
16:42 siel joined #gluster
16:43 telmich after upgrading from 3.4.2 to 3.6.2 I cannot start glusterd anymore and get the following error: http://pastie.org/9939999
16:43 telmich any idea on how to fix this?
16:44 meghanam joined #gluster
16:44 telmich ah, seems to be this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1119582
16:44 glusterbot Bug 1119582: high, unspecified, ---, rabhat, CLOSED CURRENTRELEASE, glusterd does not start if older volume exists
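[Editor's note: for this bug the upgrade notes suggest running glusterd once in upgrade mode so it regenerates the volfiles, then starting it normally; a hedged example:
    glusterd --xlator-option *.upgrade=on -N
]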
16:48 neoice left #gluster
17:05 tdasilva joined #gluster
17:05 hagarth joined #gluster
17:40 tdasilva joined #gluster
17:50 kanagaraj joined #gluster
17:57 rcampbel3 joined #gluster
18:09 Fen1 joined #gluster
18:22 theron joined #gluster
18:26 _dist joined #gluster
18:33 tdasilva joined #gluster
18:35 Rapture joined #gluster
18:44 PeterA joined #gluster
18:47 MacWinner joined #gluster
18:48 MacWinner joined #gluster
18:48 maveric_amitc_ joined #gluster
18:50 neofob joined #gluster
19:11 free_amitc_ joined #gluster
19:25 ws2k3_ joined #gluster
19:26 fattaneh1 joined #gluster
19:28 virusuy joined #gluster
19:28 virusuy joined #gluster
19:32 ricky-ticky joined #gluster
19:49 diegows joined #gluster
20:03 fattaneh1 left #gluster
20:31 deniszh joined #gluster
20:33 gildub joined #gluster
20:35 SOLDIERz joined #gluster
20:49 badone_ joined #gluster
20:54 huleboer joined #gluster
21:01 andreask joined #gluster
21:19 mbukatov joined #gluster
21:19 dbruhn joined #gluster
21:43 gildub_ joined #gluster
22:00 huleboer joined #gluster
22:15 Pupeno joined #gluster
22:15 Pupeno joined #gluster
22:17 huleboer joined #gluster
22:26 alan^ joined #gluster
22:40 unixfg joined #gluster
22:45 badone_ joined #gluster
22:46 side_control joined #gluster
22:47 alan^ Anyone around? I have some doubts about how vol files work. I understand that I have to "volume sync" to get them up to date. That said, if I modify a vol file are the changes in effect immediately or do I need to restart the daemon and if so what's the cleanest way?
22:47 mattrixha left #gluster
22:48 alan^ also, in my distributed setup each gluster node has its own copy of the vol files. Does that mean I need to update all of them simultaneously or do I just make changes in one place, volume sync and all is well?
22:48 alan^ I see a bunch under /var/lib/glusterd/vols/myvolname
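[Editor's note: a hedged aside on the questions above -- the files under /var/lib/glusterd/vols/<volname>/ are generated by glusterd, so hand edits tend to be overwritten; the usual route is to change options through the CLI, which regenerates and distributes the volfiles to all peers and notifies connected clients:
    gluster volume set myvolname performance.cache-size 256MB
    gluster volume sync <hostname> all    # pull configs from a peer only if they have diverged
]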
22:49 Pupeno_ joined #gluster
22:54 rcampbel3 joined #gluster
23:05 siel joined #gluster
23:06 Pupeno joined #gluster
23:08 gildub_ joined #gluster
23:17 Pupeno joined #gluster
23:22 n-st joined #gluster
23:28 Pupeno_ joined #gluster
23:36 Pupeno joined #gluster
23:37 jessexoc joined #gluster
23:40 Pupeno joined #gluster
23:45 T3 joined #gluster
23:46 plarsen joined #gluster
