


Predicting Popularity of Online Articles using Random Forest Regression

R. Shreyas, D. M. Akshata, B. S. Mahanand, B. Shagun, C. M. Abhishek

Department of Information Science and Engineering, Sri Jayachamarajendra College of Engineering

Mysuru, India

Abstract—Predictive analysis using machine learning has been gaining popularity in recent times. In this paper, the Random Forest regression model is used to predict the popularity of articles from the Online News Popularity data set. The performance of the Random Forest model is investigated and compared with other models. The impact of standardization, regularization, correlation, high bias/high variance and feature selection on the learning models is also studied. Results indicate that the Random Forest approach predicts popular/unpopular articles with an accuracy of 88.8%.

I. INTRODUCTION

Predictive analysis has been vital to the growth of companies like Target, Amazon and Netflix in identifying customer behavior through Business Intelligence models. These instances clearly suggest that there are business opportunities to be harnessed in the field of predictive analysis. Applying predictive analysis to articles on the web can be vital in extending a creator into a large-scale distributor of his/her content.

According to Kelwin et al. [1], current marketing techniques make use of a business intelligence approach called Decision Support Systems (DSS) to predict popularity before the actual marketing of the product. A DSS is an information system that supports business or decision-making activities for organizations. Adaptive Business Intelligence (ABI) provides the power of prediction and optimization working together to increase the popularity of articles on the web. Thus, knowing in advance which articles are likely to become popular, based on the article content, is vital for business intelligence approaches. A low share count is a clear indication to make changes to the article's content to improve popularity.

Social media platforms have given users new opportunities to become large-scale distributors of their content [2]. According to the Pew Research Center, social media usage between 2005 and 2015 shows that 65% of adults now use social networking sites - a nearly tenfold jump in the past decade. Although social media platforms provide new opportunities for users to easily distribute their content, they have also led to a huge bias in terms of the number of articles that become popular. A minuscule percentage of the articles become popular, and their share counts increase rapidly, almost exponentially, leaving the other articles virtually unknown to social media users. Understanding what makes one article more popular than

another, observing its popularity dynamics, and being able to predict its popularity has thus attracted a lot of interest in the past few years.

Prediction of article popularity, especially for articles from Mashable, has been studied by Kelwin et al. [1] and Dumont et al. [3]. However, a detailed comparison and analysis of the most popular regression models applied to Mashable articles has not yet been carried out. Hence, there is a need to experiment further to produce new and interesting results in predictive analysis using machine learning regression models applied to articles from Mashable.

This paper aims to harness machine learning algorithms to predict share counts of Mashable articles. The experiments focus on predicting a single value, the share count. Share count will refer to the value obtained from Mashable for the article under scrutiny and represents the number of times the article is shared across any social media platform.

The remainder of the paper is organized as follows. Section 2 elucidates the source of the data, the number of features, the data type of the feature values and data visualization. Section 3 covers the Random Forest model, its optimization, the comparison of thirteen regression models, and an analysis of standardization, feature selection, correlation, regularization and learning curves applied to the models. Section 4 discusses the main results and Section 5 concludes the paper with future work in the area of predictive analysis applied to text data from Twitter.

II. DATA SET & DATA VISUALIZATION

The data was collected from the UC Irvine machine learning repository [4]. The Online News Popularity (ONP) data set consists of textual data extracted from 39,797 articles, each containing 61 attributes (58 predictive attributes, 2 non-predictive, 1 goal field). The features are either integer or real. The data set does not contain any missing values.

To understand the data, a scatter plot was created and the share distribution was visualized. From Figure 1, it is evident that most of the articles lie in a band of comparatively low share count values, while a handful of articles are outliers - popular articles. The distribution of share counts of articles from six topics - Lifestyle, Entertainment, Business, Social Media, Technology and World - was analyzed. Table I shows that Business articles dominate the

2016 Second International Conference on Cognitive Computing and Information Processing (CCIP)

978-1-5090-1025-7/16/$31.00 ©2016 IEEE


Fig. 1. Scatter plot showing the distribution of share counts of the Mashable articles.

TABLE I
TOPIC NAME, MEAN SHARE COUNT, SUM OF SHARE COUNTS, AND NUMBER OF ARTICLES FROM TOPIC IN TOP 10 AND TOP 50 HIGHEST RECORDED SHARE COUNTS

Topic          Mean    Sum          Top 10   Top 50
Lifestyle      3,682    7,728,777    0        3
Entertainment  2,970   20,962,727    0        7
Business       3,063   19,168,370    5       10
Social Media   3,629    8,431,057    0        1
Technology     3,072   22,568,993    1        1
World          2,287   19,278,735    1        6

Top 10 and Top 50 highest recorded share counts of articles in the ONP data set. Based on these observations, regression models were employed to predict the share counts.

III. METHODS

A. Random Forest Approach

A random forest is a meta-estimator that fits a number of decision trees on various sub-samples of the data set and uses averaging to improve the predictive accuracy and control over-fitting [5].

The parameters of the RFR model were initially set to use 10 decision trees (n_estimators) and mean squared error (mse) as the quality-of-split criterion. After training the RFR on 60% of the data (approximately 23,500 articles), the coefficient of determination R2 was found to be -0.45 on the test set. The low R2 score is attributed to the fact that this RFR model does not use optimized parameters and the share count distribution is highly skewed.

In this work, the R2 score of the RFR model is optimized by selecting optimal parameter values. Parameter selection optimizes the RFR model by varying the values of two parameters - the number of decision trees (n_estimators), and the number of features considered when splitting a node (max_features). n_estimators and max_features were chosen because they most significantly determine, respectively, the error of each tree in the forest and the correlation among the trees. Optimizing these parameters resulted in a better RFR model than the one obtained previously.

The score on the training data set was obtained using an out-of-bag estimate (oob_score) to determine how well the RFR model performed as the two parameters were varied.

Fig. 2. A scatter plot showing the predictions (green) of share count on the test set against their true share count (blue).

Fig. 3. Finding the optimal value of n_estimators with max_features held constant at sqrt(n_features).

max_features was set to a constant value of sqrt(n_features) while n_estimators was set initially at 100 and increased in steps of 50. From Figure 3 it is observed that the oob_score gradually decreases until n_estimators is equal to 400 and then starts to increase. Thus, the optimal value for n_estimators is 400.

Fig. 4. Determining the optimal value of max_features.

To select an optimal value for max_features, the parameter was initially set to 2 and increased in steps of 2. The oob_score was calculated at each iteration, and the optimal value of max_features was determined to be 2, as can be seen in Figure 4.
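The parameter sweeps above can be sketched as follows. This is a minimal sketch, not the paper's code: the data is synthetic, the sweep range is shortened for brevity, and the selection rule here simply maximizes the out-of-bag R2 that scikit-learn exposes as `oob_score_`:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))               # illustrative data, not ONP
y = 3.0 * X[:, 0] + rng.normal(size=500)

# Sweep n_estimators with max_features fixed at sqrt(n_features),
# recording the out-of-bag estimate at each step.
oob_by_n = {}
for n in range(50, 251, 50):
    rfr = RandomForestRegressor(n_estimators=n, max_features="sqrt",
                                oob_score=True, random_state=0)
    rfr.fit(X, y)
    oob_by_n[n] = rfr.oob_score_

best_n = max(oob_by_n, key=oob_by_n.get)
```

The max_features sweep is the same loop with `n_estimators` held fixed and `max_features` stepped from 2 upward in steps of 2.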


Fig. 5. The parameter-optimized RFR predicts share counts of the less popular articles more accurately.

TABLE II
THE TOP FIVE REGRESSION MODELS, THEIR R2 SCORES, AND THE R2 SCORES AFTER STANDARDIZATION

Regression Model Name              R2 score on test set   Standardized R2 score on test set
Simple Linear Regressor            0.0194630319           -2.5176840921
Random Forest Regressor            0.0269157856            0.0304349384
Ridge                              0.0214011520           -0.0009408909
Stochastic Gradient Descent - L2   0.0214011201            0.0006722123
Isotonic Regressor                 0.0169791588            0.0263902333

Further, to prevent over-fitting, there is a need to set the maximum depth of the tree and the minimum number of samples in each newly created leaf. After optimizing n_estimators and max_features, and setting max_depth to 100 and min_samples_leaf to 500, the R2 score for the optimized RFR model was found to be 0.027, as compared to the non-optimized R2 score of -0.45.
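Combining all four tuned parameters gives the optimized configuration reported above. A hedged sketch (again on illustrative synthetic data, since the ONP preprocessing pipeline is not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))               # illustrative data, not ONP
y = X[:, 1] - X[:, 2] + rng.normal(size=600)

# The optimized parameter values reported in the paper.
rfr = RandomForestRegressor(n_estimators=400, max_features=2,
                            max_depth=100, min_samples_leaf=500,
                            random_state=0)
rfr.fit(X, y)
preds = rfr.predict(X[:5])
```

Note that min_samples_leaf=500 only constrains tree growth meaningfully on a data set the size of ONP; on a small toy sample it reduces each tree to a near-trivial stump.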

B. Other Regression Models And Further Analysis

The impact of standardization, regularization, correlation analysis between the features, feature selection and high bias/high variance is studied on well-known regression models, namely Nearest Neighbor, Lasso, Ridge, Elastic Net and the Isotonic regression model, in predicting share counts on the ONP data set.

Standardization: Feature standardization scales the values of each feature in the data set to have zero mean and unit variance. The general method of calculation is to determine the distribution mean and standard deviation for each feature, subtract the mean from each feature value, and then divide by the feature's standard deviation [6].
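The calculation just described can be written directly with the standard library (scikit-learn's StandardScaler does the same thing column-wise; the sample column below is illustrative):

```python
import statistics

def standardize(column):
    """Scale a feature column to zero mean and unit variance."""
    mean = statistics.fmean(column)
    std = statistics.pstdev(column)  # population standard deviation, ddof=0
    return [(x - mean) / std for x in column]

feature = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # mean 5, std 2
scaled = standardize(feature)
```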

Regularization: Regularization is used to encourage simple models to prevent over-fitting, which is done by adding a penalty to the loss function. This penalty is a multiple of the L1 or L2 norm. The Lasso (L1 norm) linear model with parameter alpha equal to 0.01 resulted in an R2 score of -0.23, while the Ridge (L2 norm) linear model with alpha equal to 0.01 resulted in an R2 score of 0.021.

TABLE III
R2 TEST SCORES FOR MODELS AFTER APPLYING THE VARIANCE THRESHOLD METHOD FOR FEATURE SELECTION

Variance Threshold Method                                  R2 Test Score
Nearest Neighbor - var_thresh=0.000015, num_features=11    0.0138
Poisson regression - var_thresh=0.015, num_features=51     -0.21
Random Forest - var_thresh=0.9, num_features=22            0.028
Decision Trees - var_thresh=0.9, num_features=22           0.0267
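The Lasso and Ridge comparison above can be sketched with scikit-learn using the alpha value from the paper; the regression data here is synthetic and illustrative only:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))
y = X @ rng.normal(size=30) + rng.normal(size=1000)  # linear + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

# alpha=0.01 as in the paper; L1 (Lasso) vs L2 (Ridge) penalty.
scores = {}
for name, model in [("lasso", Lasso(alpha=0.01)), ("ridge", Ridge(alpha=0.01))]:
    model.fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, model.predict(X_te))
```

On this well-behaved synthetic data both penalties score highly; the much lower scores on ONP reflect how weakly the features predict share counts, not the penalty choice.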

Correlation Analysis: During the initial data exploration, the summary statistics of all features were determined. Correlation analysis was performed on the data set to identify multicollinearity among the features. The correlation matrix provided the correlation coefficients between all pairs of features. The following pairs of features were found to have high correlation coefficients (above 0.8):

• n_non_stop_unique_tokens and n_non_stop_words
• kw_max_min and kw_avg_min
• kw_max_avg and kw_avg_avg
• self_reference_avg_shares and self_reference_max_shares
• LDA_02 and data_channel_is_world

From the above, it was inferred that the Ordinary Least Squares method would give unrepresentative predictor coefficients in the linear regression model.
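The flagging step can be done with nothing but the Pearson correlation formula; in this sketch the columns are hypothetical stand-ins for ONP features, and pairs whose |r| exceeds 0.8 are collected as in the analysis above:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Hypothetical columns standing in for ONP features.
features = {
    "a": [1.0, 2.0, 3.0, 4.0, 5.0],
    "b": [2.1, 4.0, 6.2, 8.1, 9.9],   # nearly proportional to "a"
    "c": [5.0, 1.0, 4.0, 2.0, 3.0],
}

# Flag every pair whose |r| exceeds the 0.8 cutoff.
names = list(features)
flagged = [(p, q) for i, p in enumerate(names) for q in names[i + 1:]
           if abs(pearson(features[p], features[q])) > 0.8]
```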

Learning curves: Learning curves are plots of training set size versus an error metric. They are used to decide whether a given learning model is under-fitting (high bias) or over-fitting the data (high variance). To measure the extent of high bias/high variance, the mean R2 test score is used. The learning curve of the RFR model is shown in Figure 6. From Figure 6, it is observed that, as the number of training samples increases above 2,000, the R2 score does not vary significantly.
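A learning curve of this kind can be computed with scikit-learn's learning_curve helper [7]; a minimal sketch on synthetic data (the train sizes, fold count and forest size here are arbitrary choices, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 10))
y = X[:, 0] + rng.normal(size=800)

# R2 at five training-set sizes, cross-validated over 3 folds.
sizes, train_scores, val_scores = learning_curve(
    RandomForestRegressor(n_estimators=20, random_state=0),
    X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=3, scoring="r2")

# High bias shows as the two mean curves converging to a low score.
mean_train = train_scores.mean(axis=1)
mean_val = val_scores.mean(axis=1)
```

Plotting mean_train and mean_val against sizes reproduces the shape of a figure like Figure 6.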

Feature selection: Feature selection aims to find the best subset of features that are relevant to a task. The variance threshold and random forest feature selection methods were used to identify the change in R2 score when a subset of the features was used. The variance threshold method yielded a maximum R2 score of 0.028 on the RFR model with 22 features selected, and an R2 score of 0.025 was recorded after random forest feature selection on the same RFR model. Tables III and IV present the R2 scores for a few models after applying the Variance Threshold Method and the Random Forest Feature Selection Method.
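Both selection methods have direct scikit-learn counterparts: VarianceThreshold drops low-variance columns, and SelectFromModel keeps the features a fitted forest ranks as important. A sketch on synthetic data (the column scales and the "median" importance cutoff are illustrative choices, not the paper's exact thresholds):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel, VarianceThreshold

rng = np.random.default_rng(0)
X = np.hstack([rng.normal(scale=2.0, size=(300, 10)),    # high-variance cols
               rng.normal(scale=0.01, size=(300, 5))])   # near-constant cols
y = X[:, 0] + rng.normal(size=300)

# Variance threshold: drop columns whose variance falls below the cutoff.
vt = VarianceThreshold(threshold=0.9)
X_vt = vt.fit_transform(X)

# Random-forest feature selection: keep features above median importance.
sfm = SelectFromModel(RandomForestRegressor(n_estimators=50, random_state=0),
                      threshold="median")
X_sfm = sfm.fit_transform(X, y)
```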


TABLE IV
R2 TEST SCORES FOR MODELS AFTER APPLYING THE RANDOM FOREST FEATURE SELECTION METHOD

Random Forest Feature Selection Method   R2 Test Score
Random Forest                            0.02567
Linear regression                        -0.32393
Lasso                                    -0.32379
Ridge                                    0.02140
Nearest Neighbor                         0.012947

TABLE V
THE THIRTEEN REGRESSION MODELS AND THEIR RESPECTIVE R2 SCORES

Regression Model Name                       R2 score on test set
Simple Linear Regressor                     0.0194630319
Decision Tree                               0.0082854814
Random Forest Regressor                     0.027
Nearest Neighbor                            0.0129476227
Lasso                                       -0.2309239414
Ridge                                       0.0214011520
Elastic Net                                 -1.2779228312
Stochastic Gradient Descent - L1            -2.44E+33
Stochastic Gradient Descent - L2            0.0214011201
Stochastic Gradient Descent - elastic net   -1.94E+33
Support Vector Regressor - Linear kernel    -0.4790311402
Adaboost Regressor                          0.008
Isotonic Regressor                          0.0169791588

IV. RESULTS

A. Random Forest

The Random Forest Regressor was applied to the ONP data set and its results are summarized here. Figures 2 and 5 show evidence of the RFR model accurately predicting the share counts of articles belonging to the high-density, low-share-count region.

Standardization was used to scale the feature values to zero mean and unit variance. Table II shows the test set R2 scores of the top five models, before and after standardization. The RFR model produces the best R2 score among the five models, and its R2 score increases after standardization.

The learning curve is an indicator of bias or variance within a model. The RFR model's learning curve, shown in Figure 6, indicates that at 2,000 training examples both the training and validation R2 scores converge to a near-zero score. Thus the model is suffering from high bias, and training on more than 2,000 data points is not going to drastically improve the model's performance.

B. Random Forest Approach Compared With Other Models

Thirteen regression models were compared based on their R2 scores, obtained after testing on 40% of the articles from the ONP data set. A 40% test set was chosen because three of the top five models recorded their highest R2 test scores when trained with 60% of the data set.

Table V shows the performance of each of the thirteen regression models based on their R2 score. The top models identified were the Random Forest regression model, the Simple

TABLE VI
TOPIC AND NUMBER OF ARTICLES EXTRACTED

Topic          No. of articles extracted
Lifestyle      2,099
Entertainment  7,057
Technology     7,346
Business       6,258
Social Media   2,323
World          8,428

Linear regression model, the Ridge regression model, the L2 norm based Stochastic Gradient Descent and the Isotonic regression model. The RFR model performed the best among all the models with an R2 score of 0.027.

C. Article Topic Based Analysis

The ONP data set consisted of articles on six topics - Lifestyle, Entertainment, Technology, Business, Social Media and World. Table VI shows the distribution of articles extracted from each topic. The top five best performing learning models were run on the data extracted for each of the six topics. A 60% training set size was used to train the models.

Performance of the regression models on each of the six new data sets showed that:

• Lifestyle articles were best learned by the Ridge regression model.
• Entertainment articles were best learned by the Random Forest regression model.
• Technology articles were best learned by the Random Forest regression model.
• Business articles were best learned by the Nearest Neighbor model.
• Social Media articles were best learned by the Nearest Neighbor model.
• World articles were best learned by the Lasso regression model.

Based on the data distribution of the articles in each of the six subsets, it was inferred that the Ridge regression model and the Nearest Neighbor model could be used to fit textual feature-extracted data when there is only a small number of samples to work with (around 2,000 samples).

D. Share Genie

Share Genie is an application that identifies articles in the ONP data set as either popular or unpopular. The bi-class classifier setup is explained, and the results are summarized below. The summary statistics of the actual share counts were obtained and are recorded in Table VII.

A satisfactory share-count threshold for popular articles was chosen to be the 75th percentile of the share counts. Setting the value at the 75th percentile as the threshold, data points below the 75th percentile value were labeled 0 (unpopular) and those above it 1 (popular). A Random Forest classifier was run to classify the articles into 1s and 0s (9,644 articles were used for this implementation, out of which


TABLE VII
SUMMARY STATISTICS OF THE DATA

Statistic   Shares
Count       39,643
mean        3,395
std         11,627
min         1
25%         946
50%         1,400
75%         2,800
max         843,300

60%, i.e., 5,786 were used for training, and the remainder was used to test the accuracy of the Random Forest classifier).

The RFR model was used to predict share counts of 40% of the articles (9,644 articles) from the ONP data set, and the share count value at the 75th percentile was chosen as the threshold. Share counts below the threshold value were labeled 0 and those above it were labeled 1. A Random Forest classifier was then run to classify the articles.

Results show that a threshold at the 75th percentile of the predicted share count distribution, that is, a share count of about 4,370, can be used to classify 88.8% of the articles accurately as either popular or unpopular. Similarly, a threshold at the 75th percentile of the actual share count distribution can be used to classify 74.4% of the articles as popular or unpopular.
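The percentile-based labeling step can be sketched with the standard library. The share counts below are made-up toy values, and the interpolation method of `statistics.quantiles` may place the 75th percentile slightly differently than the percentile routine used in the paper:

```python
import statistics

def label_popular(shares):
    """Label each article 1 (popular) if its share count exceeds the
    75th percentile of the distribution, else 0 (unpopular)."""
    # quantiles with n=4 returns the three quartiles; index 2 is Q3 (75th).
    threshold = statistics.quantiles(shares, n=4)[2]
    return [1 if s > threshold else 0 for s in shares], threshold

shares = [100, 250, 300, 420, 500, 800, 1200, 5000]  # toy share counts
labels, threshold = label_popular(shares)
```

The resulting 0/1 labels are what the Random Forest classifier is then trained to predict from the article features.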

Fig. 6. RFR Learning Curve.

V. CONCLUSION

In this work, the Random Forest regression model was applied to the ONP data set. Through optimization, a more accurate RFR model in terms of R2 score was obtained. The optimized RFR model predicted the share counts of articles in the low-share-count, dense band region with good accuracy. A comparison of thirteen regression models resulted in the identification of the top five models in terms of R2 score. The effects of standardization, regularization, correlation and feature selection on these models were summarized. The learning curve of the RFR model showed the presence of high bias. A Random Forest classification model applied to the ONP data set predicted whether an article would be popular or unpopular. A threshold at the 75th percentile of the predicted share counts classified 88.8% of the articles accurately.

In the future, the optimized regression model can be applied to tweets to predict popularity. Clustering tweets on the sentiments of their text and other advanced features could potentially lead to new insights into their distributions.

REFERENCES

[1] K. Fernandes, P. Vinagre and P. Cortez. A Proactive Intelligent Decision Support System for Predicting the Popularity of Online News. Proceedings of the 17th EPIA 2015 - Portuguese Conference on Artificial Intelligence, September 2015, Coimbra, Portugal.

[2] A. Tatar, M. De Amorim, S. Fdida and P. Antoniadis. "A survey on predicting the popularity of web content." Journal of Internet Services and Applications, vol. 5, no. 1, 2014.

[3] Dumont, Felix, Ethan Bryce Macdonald, and Andres Felipe Rincon. Predicting Mashable and Reddit Popularity Using Closed-form and Gradient Descent Linear Regression.

[4] archive.ics.uci.edu/ml/datasets/Online+News+Popularity

[5] scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html

[6] en.wikipedia.org/wiki/Feature_scaling#Standardization

[7] scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html

[8] M. Ahmed, S. Spagna, F. Huici, and S. Niccolini. A peek into the future: predicting the evolution of popularity in user generated content. In Proceedings of the 6th International Conference on Web Search and Data Mining, pages 607-616. ACM, 2013.

[9] S.-D. Kim, S.-H. Kim, and H.-G. Cho. Predicting the virtual temperature of web-blog articles as a measurement tool for online popularity. In Proceedings of the 11th International Conference on Computer and Information Technology, pages 449-454, 2011.

[10] J. G. Lee, S. Moon, and K. Salamatian. An approach to model and predict the popularity of online contents with explanatory factors. In Proceedings of the International Conference on Web Intelligence and Intelligent Agent Technology, pages 623-630. IEEE Computer Society, 2010.

[11] K. Lerman and T. Hogg. Using a model of social dynamics to predict popularity of news. In Proceedings of the 19th International Conference on World Wide Web, pages 621-630. ACM, 2010.

[12] G. Szabo and B. A. Huberman. Predicting the popularity of online content. Communications of the ACM, 53(8):80-88, Aug. 2010.

[13] A. Tatar, J. Leguay, P. Antoniadis, A. Limbourg, M. D. de Amorim, and S. Fdida. Predicting the popularity of online articles based on user comments. In Proceedings of the International Conference on Web Intelligence, Mining and Semantics, pages 67:1-67:8. ACM, 2011.

[14] Hensinger, Elena, Ilias Flaounas, and Nello Cristianini. "Modelling and predicting news popularity." Pattern Analysis and Applications 16.4 (2013): 623-635.

[15] Knobloch-Westerwick, Silvia, et al. "Impact of popularity indications on readers' selective exposure to online news." Journal of Broadcasting & Electronic Media 49.3 (2005): 296-313.

[16] Tsagkias, Manos, Wouter Weerkamp, and Maarten De Rijke. "News comments: Exploring, modeling, and online prediction." Advances in Information Retrieval. Springer Berlin Heidelberg, 2010. 191-203.

[17] Castillo, Carlos, et al. "Characterizing the life cycle of online news stories using social media reactions." Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, 2014.

[18] Tatar, Alexandru, et al. "From popularity prediction to ranking online news." Social Network Analysis and Mining 4.1 (2014): 1-12.
