1. PROBLEM INTRODUCTION
Recommender Systems (RSs) are software tools and techniques that suggest to a user a well-selected set of items matching the user's tastes and preferences. The suggestions relate to various decision-making processes, such as what items to buy, what music to listen to, or what online news to read. RSs are widely used in Web-based e-commerce applications to help online users choose the most suitable products, e.g., movies, CDs, books, or travel packages.

The large majority of RSs are designed to make recommendations for individual users. Since recommendations are usually personalized, different users receive different suggestions. However, in some circumstances the items to be selected are not intended for personal use but for a group of users; e.g., a DVD could be watched by a group of friends or by a family. These groups range from stable groups to ad hoc groups requiring recommendations only occasionally. For this reason, some works have addressed the problem of identifying recommendations that are "good for" a group of users, i.e., that satisfy, as much as possible, the individual preferences of all the group's members. Group recommendation approaches are based either on the generation of an integrated group profile or on the integration of recommendations built for each member separately.
A major issue in this research area is the difficulty of evaluating the effectiveness of group recommendations, i.e., comparing the recommendations generated for a group with the true preferences of the individual members. One general approach to such an evaluation consists of interviewing real users. Here there are two options: either acquire the users' individual evaluations of the group recommendations and then integrate (e.g., by averaging) these evaluations into a score that the group "jointly" assigns to the recommendations, or directly acquire a joint evaluation of the group for the recommendations. In the first case one must decide how the individual evaluations are integrated; this is problematic because different methods will produce different results, and there is no single best way to perform such an integration. Another difficulty, common to both options, is that the satisfaction of an individual is likely to depend on that of other individuals in the group (emotional contagion). Moreover, on-line evaluations can be performed only on a very limited set of test cases and cannot be used to extensively test alternative algorithms.
A second approach consists of performing off-line evaluations, where groups are sampled from the users of a traditional (i.e., single-user) RS. Group recommendations are offered to group members and evaluated independently by them, as in the classical single-user case, by comparing the predicted ratings (rankings) with the ratings (rankings) observed in each user's test set. The group recommendations are generated to suit simultaneously the preferences of all the users in the group, and intuition suggests that they cannot be as good as individually tailored recommendations. With this evaluation approach, testing the effectiveness of a group recommendation does not require joint group evaluations of the recommended items, and the most popular data sets (e.g., MovieLens or Netflix), which contain only evaluations (ratings) of individual users, can be reused.
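The per-user comparison of a predicted ranking against held-out test ratings can be done with a standard ranking metric such as nDCG. The sketch below is illustrative only (the item names and ratings are made-up examples, not from the paper's experiments):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain of a ranked list of relevance scores.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_items, true_ratings):
    # Compare a predicted ranking against a user's held-out test ratings.
    gains = [true_ratings.get(item, 0.0) for item in ranked_items]
    ideal = sorted(true_ratings.values(), reverse=True)[:len(ranked_items)]
    return dcg(gains) / dcg(ideal) if ideal else 0.0

test_ratings = {"A": 5, "B": 3, "C": 1}
print(ndcg(["A", "B", "C"], test_ratings))  # ideal order -> 1.0
print(ndcg(["C", "B", "A"], test_ratings))  # reversed order -> below 1.0
```

A group ranking can be scored this way once per member, quantifying how much each member loses relative to an individually tailored ranking.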
In this paper we follow the second approach, evaluating a set of novel group recommender techniques based on rank aggregation. We first generate synthetic groups (using various criteria), then build recommendation lists for these groups, and finally evaluate these group recommendations on the users' test sets. The group recommendations are built in two steps. First, a collaborative filtering algorithm produces individual users' rating predictions for the test items, and individual ranking predictions are generated from these rating predictions. Then a rank aggregation method generates a joint ranking of the items recommended to the group by integrating the individual ranking predictions computed in the previous step. We then measure, for each group member, how good this integrated ranking is, and whether it is worse than the initial individual ranking built by the system using only the ratings contained in the user's profile. We analyzed the generated group recommendations (rankings) while varying the size of the groups, the intra-group member similarity, and the rank aggregation mechanism.
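The two-step pipeline above can be sketched in a few lines. The predicted ratings below are hypothetical, and average rank position is used here as just one possible aggregation mechanism:

```python
# Hypothetical predicted ratings for three group members over the same candidate items
# (step 1 input: output of some collaborative filtering algorithm).
predicted = {
    "alice": {"X": 4.8, "Y": 3.1, "Z": 4.0},
    "bob":   {"X": 2.5, "Y": 4.6, "Z": 4.1},
    "carol": {"X": 4.2, "Y": 3.9, "Z": 4.5},
}

# Step 1: turn each member's rating predictions into an individual ranking.
individual_rankings = {
    user: sorted(ratings, key=ratings.get, reverse=True)
    for user, ratings in predicted.items()
}

# Step 2: aggregate the individual rankings into one group ranking,
# here by average rank position (lower mean position = better).
items = next(iter(predicted.values())).keys()
mean_pos = {
    item: sum(r.index(item) for r in individual_rankings.values())
          / len(individual_rankings)
    for item in items
}
group_ranking = sorted(mean_pos, key=mean_pos.get)
print(group_ranking)  # ['Z', 'X', 'Y']
```

Item Z tops the group list even though it is no member's single favorite, which is exactly the kind of compromise a group recommender is expected to produce.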
2. METHODS FOR GROUP RECOMMENDATIONS
Our group recommendation method is based on the ordinal ranking of items, i.e., the result of a group recommendation process is an ordered list of items. To generate the ranking we use rank aggregation methods, which take a set of predicted ranked lists, one for each group member, and produce one combined, ordered recommendation list.
Most of the available rank aggregation methods are inspired by results from Social Choice Theory, which studies how individual preferences can be aggregated to reach a collective consensus. Social choice theorists are concerned with combining ordinal rankings rather than ratings. Thus, they search for a societal preference result, where the output is obtained by combining several people's ordered preferences over the same set of choices. For instance, in elections the ranked candidate lists provided by voters are collected and aggregated into one final ranking. Arrow proved that this aggregation task is impossible if the combined preferences must satisfy a few compelling properties. His theorem states that there cannot be one perfect aggregation method, and this leaves room for a variety of different aggregation methods to exist.
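One classical social-choice aggregation method is the Borda count, in which each ranked list awards points by position. A minimal sketch, with made-up votes over three candidates:

```python
def borda(rankings):
    # Each ranked list of n items awards n-1 points to its top item,
    # n-2 to the next, ..., and 0 to the last.
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - 1 - pos)
    # Return items sorted by total Borda score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

votes = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]
print(borda(votes))  # ['B', 'A', 'C']
```

In the group recommendation setting, each "voter" is a group member's predicted ranking, and the Borda winner list becomes the group recommendation.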
3. PROJECTS, REFERENCES AND RESOURCES
1) Open Source Project
Surprise: https://pypi.org/project/scikit-surprise/
Surprise is a Python scikit for building and analyzing recommender systems.
Surprise was designed with the following purposes in mind:
Give users perfect control over their experiments. To this end, a strong emphasis is laid on documentation, which we have tried to make as clear and precise as possible by pointing out every detail of the algorithms.
Alleviate the pain of Dataset handling. Users can use both built-in datasets (Movielens, Jester) and their own custom datasets.
Provide various ready-to-use prediction algorithms such as baseline algorithms, neighborhood methods, matrix factorization-based methods (SVD, PMF, SVD++, NMF), and many others. Also, various similarity measures (cosine, MSD, Pearson, ...) are built in.
Make it easy to implement new algorithm ideas.
Provide tools to evaluate, analyse and compare the algorithms' performance. Cross-validation procedures can be run very easily using powerful CV iterators (inspired by scikit-learn's excellent tools), as well as exhaustive search over a set of parameters.
The name SurPRISE stands for Simple Python RecommendatIon System Engine.
2) MovieLens Latest Datasets
These datasets will change over time and are not appropriate for reporting research results. We will keep the download links stable for automated downloads. We will not archive or make available previously released versions. (https://grouplens.org/datasets/movielens/)
Small: 100,000 ratings and 3,600 tag applications applied to 9,000 movies by 600 users. Last updated 9/2018.
ml-latest-small.zip (size: 1 MB)
Full: 27,000,000 ratings and 1,100,000 tag applications applied to 58,000 movies by 280,000 users. Includes tag genome data with 14 million relevance scores across 1,100 tags. Last updated 9/2018.
ml-latest.zip (size: 265 MB)
Permalink: https://grouplens.org/datasets/movielens/latest/
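The ratings file in these datasets is a plain CSV with a `userId,movieId,rating,timestamp` header, so it can be read with the standard library alone. A few sample rows stand in for the real `ratings.csv` here:

```python
import csv
import io

# In-memory stand-in for ml-latest-small's ratings.csv; a real run would
# open the extracted file instead of this StringIO buffer.
sample = io.StringIO(
    "userId,movieId,rating,timestamp\n"
    "1,1,4.0,964982703\n"
    "1,3,4.0,964981247\n"
    "2,1,3.5,1141415820\n"
)
ratings = [
    (int(row["userId"]), int(row["movieId"]), float(row["rating"]))
    for row in csv.DictReader(sample)
]
print(ratings[0])  # (1, 1, 4.0)
```

For the full 27M-rating file, streaming row by row with `csv.DictReader` avoids loading everything into memory at once.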
3) Download of camra2011, MovieLens and Papers
2010 Group Recommendations with Rank Aggregation and Collaborative Filtering
camra2011
MovieLens
4) Service
If you have any questions, please join the QQ group (number: 625686304) or follow the WeChat Public Account: douAsk. Scan the picture below to follow our WeChat Public Account, and you can download the resources and read the articles.

If you enter the painted egg number, you will get the download address of the resources.
The painted egg number of this resource is 9100.

