@Ahmed and @dollajas , I talked briefly about this idea with @andrew . I think it would be a good idea to leverage objective data on acceptance/rejection rates for each user. From the maintainer perspective, it would help us establish a trust level for users, identify future maintainers, and in rare cases spot users who routinely submit poor suggestions. From the user perspective, this could be used to build out an achievement/award system that would encourage more user interaction. This could be expanded to include awards for adding a thumbs up to a post that is eventually accepted, or a thumbs down to one that is rejected. @andrew informed me that much of this info is already being collected and could be made available to maintainers, and possibly to everyone if we build out a dashboard for it.
Little features I think we would need to include are:
- Manual button to prevent a decision from being counted against stats. If two people suggest the same good edit, one has to be rejected, and that shouldn't count against them. Self-deleting a post should also not count.
- Should bulk suggestions be excluded from stats?
- Stat/achievement categories are separated by suggestion type. For example, someone who is great at tags may be bad at clinical content/format.
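To make the exclusion rules above concrete, here's a minimal sketch of how per-user, per-type acceptance rates could be computed. All field and function names here are hypothetical; the real schema would come from whatever data @andrew says is already being collected.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Suggestion:
    # Hypothetical fields; the actual schema would come from the existing data.
    user: str
    type: str                 # e.g. "tags", "content", "formatting"
    accepted: bool
    exempt: bool = False      # maintainer pressed the "don't count this" button
    bulk: bool = False        # part of a bulk submission
    self_deleted: bool = False

def acceptance_rates(suggestions):
    """Per-(user, type) acceptance rate, skipping exempt/bulk/self-deleted entries."""
    counts = defaultdict(lambda: [0, 0])  # (user, type) -> [accepted, total]
    for s in suggestions:
        if s.exempt or s.bulk or s.self_deleted:
            continue  # these never count for or against the user's stats
        entry = counts[(s.user, s.type)]
        entry[1] += 1
        if s.accepted:
            entry[0] += 1
    return {key: acc / total for key, (acc, total) in counts.items()}
```

Keying the stats by (user, type) rather than just by user is what makes the per-category achievements in the last bullet possible.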
Going off the last point, I think it would also benefit us to better separate the types of suggestions for all users. I know you can filter by one suggestion type, but that aspect could be improved. Part of the lack of involvement is likely that suggestions worth discussing get lost in the sheer volume of other posts people are less interested in discussing. A start would be separating the tag suggestions and the image replacement stuff. Ideally, users could select more than one suggestion type they want to see for each deck and have that saved as the default.
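The per-deck saved filter could be as simple as a mapping from deck to a set of suggestion types, falling back to "show everything" when no preference is saved. This is just an illustrative sketch; the type names and structure are assumptions, not the app's actual API.

```python
# Hypothetical suggestion types; the real list would match whatever the app uses.
ALL_TYPES = {"tags", "content", "formatting", "image"}

def visible_suggestions(suggestions, deck, prefs):
    """Show only the suggestion types the user saved for this deck.

    `prefs` maps deck name -> set of types; decks with no saved
    preference default to showing all types.
    """
    wanted = prefs.get(deck, ALL_TYPES)
    return [s for s in suggestions if s["type"] in wanted]
```

Because the preference is stored per deck, someone could see only tag suggestions on one deck while still seeing everything on another.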
Would love to hear others' thoughts on this.