Recommend | Dissecting the Microsoft SAR nearest-neighbor collaborative filtering algorithm (2)

Understanding oneself 2020-11-13 10:09:02
Tags: recommend, disassembly, MSAR, nearest neighbor


The previous article, Recommend | Microsoft SAR analysis of the nearest-neighbor collaborative filtering algorithm (1), introduced the SAR algorithm as a whole; the algorithm itself is fairly easy to understand. This article focuses on some of its interesting small utility functions.



1 Jaccard / lift rescaling of the co-occurrence matrix

This is the step where the co-occurrence matrix $C$ becomes the item-similarity matrix $S$, which captures how strongly items are correlated with one another.

Once we have a co-occurrence matrix, the similarity matrix can be obtained by rescaling the co-occurrences according to a given measure: Jaccard, lift, or counts (counts is plain counting; the values are left unchanged, with no compression or scaling).

If $c_{ii}$ and $c_{jj}$ are the $i$-th and $j$-th diagonal elements of $C$, the rescaling options are:

  • Jaccard: $s_{ij} = \frac{c_{ij}}{c_{ii} + c_{jj} - c_{ij}}$
  • lift: $s_{ij} = \frac{c_{ij}}{c_{ii} \times c_{jj}}$
  • counts: $s_{ij} = c_{ij}$

The corresponding code is as follows:

import numpy as np


def jaccard(cooccurrence):
    """Helper method to calculate the Jaccard similarity of a matrix of co-occurrences.

    Args:
        cooccurrence (np.array): the symmetric matrix of co-occurrences of items.

    Returns:
        np.array: The matrix of Jaccard similarities between any two items.
    """
    diag = cooccurrence.diagonal()
    diag_rows = np.expand_dims(diag, axis=0)
    diag_cols = np.expand_dims(diag, axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        result = cooccurrence / (diag_rows + diag_cols - cooccurrence)
    return np.array(result)


def lift(cooccurrence):
    """Helper method to calculate the Lift of a matrix of co-occurrences.

    Args:
        cooccurrence (np.array): the symmetric matrix of co-occurrences of items.

    Returns:
        np.array: The matrix of Lifts between any two items.
    """
    diag = cooccurrence.diagonal()
    diag_rows = np.expand_dims(diag, axis=0)
    diag_cols = np.expand_dims(diag, axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        result = cooccurrence / (diag_rows * diag_cols)
    return np.array(result)

Both helpers read the diagonal of their input, so the matrix must be square:

import numpy as np

data = [[1, 3, 5], [3, 1, 6], [5, 6, 1]]
x = np.array(data)
x

# jaccard / lift on a square matrix
jaccard(x)  # input must be square
lift(x)

The jaccard output:

jaccard(x)  # input must be square
Out[63]:
array([[ 1.        , -3.        , -1.66666667],
       [-3.        ,  1.        , -1.5       ],
       [-1.66666667, -1.5       ,  1.        ]])
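Note that this toy x is not a realistic co-occurrence matrix: its diagonal is smaller than the off-diagonal counts, which is why the Jaccard values fall outside the usual [0, 1] range (for a true co-occurrence matrix, $c_{ii} \geq c_{ij}$). For comparison, lift divides each entry by the product of the two diagonal entries; since the diagonal of x is all ones here, lift(x) just returns x as floats:

lift(x)
# array([[1., 3., 5.],
#        [3., 1., 6.],
#        [5., 6., 1.]])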

2 Extracting the top-k items from a matrix

import logging

import numpy as np
from scipy import sparse

logger = logging.getLogger(__name__)


def get_top_k_scored_items(scores, top_k, sort_top_k=False):
    """Extract top K items from a matrix of scores for each user-item pair,
    optionally sort results per user.

    Args:
        scores (np.array): score matrix (users x items).
        top_k (int): number of top items to recommend.
        sort_top_k (bool): flag to sort top k results.

    Returns:
        np.array, np.array: indices into score matrix for each user's top items,
            scores corresponding to top items.
    """
    # ensure we're working with a dense ndarray
    if isinstance(scores, sparse.spmatrix):
        scores = scores.todense()

    if scores.shape[1] < top_k:
        logger.warning(
            "Number of items is less than top_k, limiting top_k to number of items"
        )
    k = min(top_k, scores.shape[1])

    test_user_idx = np.arange(scores.shape[0])[:, None]

    # get top K items and scores
    # this determines the un-ordered top-k item indices for each user
    top_items = np.argpartition(scores, -k, axis=1)[:, -k:]
    top_scores = scores[test_user_idx, top_items]

    if sort_top_k:
        sort_ind = np.argsort(-top_scores)
        top_items = top_items[test_user_idx, sort_ind]
        top_scores = top_scores[test_user_idx, sort_ind]

    return np.array(top_items), np.array(top_scores)

Given a user-item score matrix, this extracts the top-k items for each user.
Example:

import numpy as np

data = [[1, 3, 5], [3, 1, 6], [5, 6, 1]]
x = np.array(data)
get_top_k_scored_items(x, 1, sort_top_k=False)  # extract the row-wise maximum

The result is as follows:

(array([[2],
        [2],
        [1]], dtype=int64),
 array([[5],
        [6],
        [6]]))

For the first row (the first user), the index of the highest-scoring item is 2 (first array) and its score is 5 (second array).
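With sort_top_k=True and a larger k, each user's top items come back ordered by descending score. For the same x (a quick check, with the expected values worked out by hand):

get_top_k_scored_items(x, 2, sort_top_k=True)
# (array([[2, 1],
#         [2, 0],
#         [1, 0]], dtype=int64),
#  array([[5, 3],
#         [6, 3],
#         [6, 5]]))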


3 Sparse matrix construction

Before this I had also studied sparse matrices (scipy.sparse, pandas.sparse, and the use of sparse matrices in sklearn), so here is just a quick look at how SAR uses them:

build the matrix as a coo_matrix -> convert to csr for computation

An excerpt from sar_singlenode.py:

# generate pseudo user affinity using seed items
pseudo_affinity = sparse.coo_matrix(
    (ratings, (user_ids, item_ids)), shape=(n_users, self.n_items)
).tocsr()
# calculate raw scores with a matrix multiplication
test_scores = pseudo_affinity.dot(self.item_similarity)
# remove items in the seed set so recommended items are novel
test_scores[user_ids, item_ids] = -np.inf
top_items, top_scores = get_top_k_scored_items(
    scores=test_scores, top_k=top_k, sort_top_k=sort_top_k
)

Here is a simple construction example:

# sparse matrix
ratings = [5, 4, 3]
user_ids = [0, 1, 2]
item_ids = [1, 2, 1]
n_users = len(user_ids)  # arbitrary here; must be at least max(user_ids) + 1
n_items = 10             # arbitrary here; must be at least max(item_ids) + 1
pseudo_affinity = sparse.coo_matrix(
    (ratings, (user_ids, item_ids)), shape=(n_users, n_items)
).tocsr()
pseudo_affinity.todense()
pseudo_affinity.toarray()

The result is:

array([[0, 5, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 4, 0, 0, 0, 0, 0, 0, 0],
       [0, 3, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)
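Putting the pieces together, here is a minimal end-to-end sketch of the scoring step from sar_singlenode.py above, reusing get_top_k_scored_items from section 2 and a hand-made item_similarity matrix as a hypothetical stand-in for the jaccard()/lift() output of section 1:

import numpy as np
from scipy import sparse

ratings = [5, 4, 3]
user_ids = [0, 1, 2]
item_ids = [1, 2, 1]
n_users, n_items = 3, 4  # kept tiny for readability

affinity = sparse.coo_matrix(
    (ratings, (user_ids, item_ids)), shape=(n_users, n_items)
).tocsr()

# hypothetical item-similarity matrix (in SAR this comes from jaccard/lift)
item_similarity = np.array([
    [1.0, 0.2, 0.1, 0.0],
    [0.2, 1.0, 0.3, 0.5],
    [0.1, 0.3, 1.0, 0.4],
    [0.0, 0.5, 0.4, 1.0],
])

test_scores = affinity.dot(item_similarity)  # raw scores, users x items
test_scores[user_ids, item_ids] = -np.inf    # mask seed items so recommendations are novel
top_items, top_scores = get_top_k_scored_items(test_scores, top_k=2, sort_top_k=True)
# e.g. user 0 (who rated item 1) gets items 3 and 2 recommended, with scores 2.5 and 1.5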

Some background on csr_matrix (reference: "python scipy sparse matrix details"):

csr_matrix supports the usual arithmetic operations: addition, subtraction, multiplication, division, and matrix powers. It has five instantiation methods. The first four are the same as for coo_matrix: from a dense matrix, by converting another type of sparse matrix, by building an empty matrix of a given shape, and from (row, col, data) triplets. The fifth directly reflects csr_matrix's storage layout: csr_matrix((data, indices, indptr), [shape=(M, N)]), meaning that the column indices of the non-zero elements of row i are indices[indptr[i]:indptr[i+1]] and their values are data[indptr[i]:indptr[i+1]].

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> csr_matrix((data, indices, indptr), shape=(3, 3)).toarray()
array([[1, 0, 2],
       [0, 0, 3],
       [4, 5, 6]])

csr_matrix also has many other methods, for example the conversion methods toarray(), todense(), and tocoo(); for the rest of the details refer to the official documentation. The first five attributes of a csr_matrix object are the same as coo_matrix's, and there are additional attributes as follows:

  • indices

    In one-to-one correspondence with the data attribute; each value is the column index of a non-zero element within its row

  • indptr

    The start offset of each row within data/indices; len(csr_object.indptr) == csr_object.shape[0] + 1

  • has_sorted_indices

    Whether the column indices within each row are sorted; a bool (see the short demo after the next list)

  • Advantages of csr_matrix:

    • Efficient arithmetic operations: CSR + CSR, CSR * CSR, etc.
    • Efficient row slicing
    • Fast matrix-vector products
  • Disadvantages of csr_matrix:

    • Slow column slicing (consider csc_matrix)
    • Slow changes to the sparsity structure (consider lil_matrix or dok_matrix)
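A quick look at these attributes on the csr matrix from the doctest above (a minimal sketch; the expected values follow directly from the (data, indices, indptr) arrays used there):

m = csr_matrix((data, indices, indptr), shape=(3, 3))
m.indices             # array([0, 2, 2, 0, 1, 2], dtype=int32)
m.indptr              # array([0, 2, 3, 6], dtype=int32)
m.has_sorted_indices  # True: each row's column indices are ascending
m[1].toarray()        # cheap row slicing on CSR: array([[0, 0, 3]])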

4 Some evaluation metrics: NDCG, MAP, MRR, HR, ILS, ROC, AUC, F1, etc.

4.1 Hit Ratio (HR)

In top-K recommendation, HR is a commonly used metric for measuring recall. The formula is:

$$HR@K = \frac{\text{NumberOfHits@K}}{|GT|}$$

The denominator $|GT|$ is the total number of test-set items across all users, and the numerator is the sum over users of the number of test-set items that appear in each user's top-K list.

A simple example: the test sets of three users contain 10, 12, and 8 items respectively, and of the model's top-10 recommendation lists, 6, 5, and 4 items respectively land in the test sets. The HR is then
(6 + 5 + 4) / (10 + 12 + 8) = 0.5.

A related helper:

def hit(gt_items, pred_items):
    count = 0
    for item in pred_items:
        if item in gt_items:
            count += 1
    return count
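Using hit() to reproduce a miniature version of the HR computation above (a hedged sketch; the ground-truth and prediction lists are made up for illustration):

# hypothetical ground truths and top-K predictions for two users
gt_lists = [[1, 2, 3, 4], [5, 6]]
pred_lists = [[2, 4, 9], [6, 7, 8]]

hits = sum(hit(gt, pred) for gt, pred in zip(gt_lists, pred_lists))
total = sum(len(gt) for gt in gt_lists)
hr = hits / total  # (2 + 1) / (4 + 2) = 0.5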

4.2 Mean Average Precision (MAP)

Average precision (AP): suppose we Google a keyword and get 10 results back. The best case, of course, is that all 10 results are relevant. If only some of them are relevant, say 5, then it is still a fairly good outcome if those 5 appear near the top; but if the 5 relevant results only start at position 6, the outcome is fairly poor. This is what AP reflects. It is similar in spirit to recall, but it is an "order-sensitive recall".

For a user $u$ with a ranked recommendation list, the average precision of $u$ is:

$$AP_u = \frac{1}{|GT_u|} \sum_{n} P(n) \cdot rel(n)$$

where $P(n)$ is the precision of the first $n$ recommendations and $rel(n)$ is 1 if the item at position $n$ is in the ground truth $GT_u$ and 0 otherwise (this matches the code below).

def AP(ranked_list, ground_truth):
    """Compute the average precision (AP) of a list of ranked items."""
    hits = 0
    sum_precs = 0
    for n in range(len(ranked_list)):
        if ranked_list[n] in ground_truth:
            hits += 1
            sum_precs += hits / (n + 1.0)
    if hits > 0:
        return sum_precs / len(ground_truth)
    else:
        return 0

MAP is the mean of AP over all users $u$:

$$MAP = \frac{1}{|U|} \sum_{u \in U} AP_u$$
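A minimal MAP helper built on the AP() function above (a sketch; the per-user lists are made up for illustration):

def MAP(ranked_lists, ground_truths):
    """Mean of AP over all users."""
    return sum(
        AP(r, gt) for r, gt in zip(ranked_lists, ground_truths)
    ) / len(ground_truths)

MAP([[1, 2, 3], [4, 5]], [[1, 3], [5]])
# user 1: AP = (1/1 + 2/3) / 2 = 0.8333...; user 2: AP = (1/2) / 1 = 0.5
# MAP = (0.8333 + 0.5) / 2 = 0.6666...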

4.3 Normalized Discounted Cumulative Gain (NDCG)

Cumulative gain (CG): in a recommender system, CG accumulates the relevance score of each recommended result into a score for the whole recommendation list:

$$CG@k = \sum_{i=1}^{k} rel_i$$

where $rel_i$ is the relevance of the recommendation at position $i$ and $k$ is the size of the recommendation list.

CG does not consider the effect of position within the list: we always want highly relevant results ranked first, and low-relevance items near the top hurt the user experience.

Suppose a search for "basketball" ideally returns B1, B2, B3; if the actual result is B3, B1, B2, the CG value does not change. That is why we need DCG, below.

DCG introduces a position factor on top of CG. The formula is:

$$DCG@k = \sum_{i=1}^{k} \frac{rel_i}{\log_2(i+1)}$$

From this formula we can see that: 1) the more relevant the recommended results, the larger the DCG; 2) the more the highly relevant results sit at the top of the list, the better the recommendation and the larger the DCG.

DCG values are hard to compare across different recommendation lists, and we cannot evaluate a recommender from a single user's list and results alone; we have to evaluate over all users and their recommendation lists in the test set. The scores of different users' lists therefore need to be normalized, which gives NDCG.

IDCG is the DCG of the best possible recommendation list for the user, i.e. the returned results re-sorted by relevance so that the most relevant come first. Hence DCG lies in (0, IDCG] and NDCG lies in (0, 1], and user $u$'s NDCG@K is defined as:

$$NDCG_u@K = \frac{DCG_u@K}{IDCG_u@K}$$

The average NDCG is then:

$$NDCG@K = \frac{1}{|U|} \sum_{u \in U} NDCG_u@K$$

For example, suppose a search returns 6 results whose relevance scores are 3, 2, 3, 0, 1, 2.

Then CG = 3 + 2 + 3 + 0 + 1 + 2 = 11.

CG just adds up the relevance scores; the position of each result has no effect on the ranking score. Looking at DCG instead:

DCG = 3/log2(2) + 2/log2(3) + 3/log2(4) + 0/log2(5) + 1/log2(6) + 2/log2(7)
    = 3 + 1.26 + 1.5 + 0 + 0.38 + 0.71 = 6.86

Next we normalize. Normalization first requires IDCG. Suppose we actually recalled 8 items: besides the 6 above there are two more results, with relevance 3 for the 7th and 0 for the 8th. In the ideal case the relevance scores, sorted descending, would be 3, 3, 3, 2, 2, 1, 0, 0. Computing IDCG@6 over the first six:

IDCG@6 = 3/log2(2) + 3/log2(3) + 3/log2(4) + 2/log2(5) + 2/log2(6) + 1/log2(7)
       = 3 + 1.89 + 1.5 + 0.86 + 0.77 + 0.35 = 8.37

So the final NDCG@6 = 6.86 / 8.37 = 81.96%.
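The worked example, reproduced in a few lines of Python (a sketch using the $rel_i / \log_2(i+1)$ form of DCG from above; the exact values differ from the hand-rounded sums in the last decimals):

import numpy as np

def dcg(rels):
    """DCG with linear gain: sum of rel_i / log2(i + 1), positions starting at 1."""
    rels = np.asarray(rels, dtype=float)
    positions = np.arange(1, len(rels) + 1)
    return float(np.sum(rels / np.log2(positions + 1)))

rels = [3, 2, 3, 0, 1, 2]  # relevance of the returned list
ideal = sorted([3, 2, 3, 0, 1, 2, 3, 0], reverse=True)[:6]  # best ordering of all 8 recalled items

ndcg = dcg(rels) / dcg(ideal)
print(round(dcg(rels), 2), round(dcg(ideal), 2), round(ndcg, 4))
# 6.86 8.38 0.8184  (the text's 8.37 and 81.96% come from rounding each term first)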


5 Timing function

You can time code with IPython's %time magic; here is a reusable Timer class alternative:

from timeit import default_timer
import time


class Timer(object):
    """Timer class.

    `Original code <https://github.com/miguelgfierro/pybase/blob/2298172a13fb4a243754acbc6029a4a2dcf72c20/log_base/timer.py>`_.

    Examples:
        >>> import time
        >>> t = Timer()
        >>> t.start()
        >>> time.sleep(1)
        >>> t.stop()
        >>> t.interval < 1
        True
        >>> with Timer() as t:
        ...   time.sleep(1)
        >>> t.interval < 1
        True
        >>> "Time elapsed {}".format(t)  # doctest: +ELLIPSIS
        'Time elapsed 1...'
    """

    def __init__(self):
        self._timer = default_timer
        self._interval = 0
        self.running = False

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, *args):
        self.stop()

    def __str__(self):
        return "{:0.4f}".format(self.interval)

    def start(self):
        """Start the timer."""
        self.init = self._timer()
        self.running = True

    def stop(self):
        """Stop the timer. Calculate the interval in seconds."""
        self.end = self._timer()
        try:
            self._interval = self.end - self.init
            self.running = False
        except AttributeError:
            raise ValueError(
                "Timer has not been initialized: use start() or the contextual form with Timer() as t:"
            )

    @property
    def interval(self):
        """Get time interval in seconds.

        Returns:
            float: Seconds.
        """
        if self.running:
            raise ValueError("Timer has not been stopped, please use stop().")
        else:
            return self._interval

After running this, we can read off the elapsed time:

# run
with Timer() as t:
    time.sleep(1)
# elapsed time in seconds
t.interval

Structurally the class covers a lot (context-manager support, start/stop, formatted printing), so it can easily be reused later.


References:

Commonly used evaluation metrics for recommendation algorithms: NDCG, MAP, MRR, HR, ILS, ROC, AUC, F1, etc.
Search metrics: NDCG

Copyright notice
This article was created by [Understanding oneself]; when reposting, please include a link to the original. Thanks.
