Neural networks are a class of technologies inspired by biological neural networks; they rely on observational data and learn from examples rather than explicit programming. They are used increasingly in big data analytics in social media. Neural networks are highly efficient at analyzing large volumes of raw data and are applied to tasks such as image and speech recognition. Although neural networks can be useful in data analytics, they can harm social media users because they enable data mining that targets users' private information and preferences. The first part of this paper examines the key challenges that the use of neural networks has created in the social media industry; the second part is dedicated to strategies that could be used to overcome those challenges.
Deep learning, the strategy of analyzing, extracting, and compiling data that neural networks use, is possible due to the availability of big data: the large volumes of raw data that neural networks analyze. Frequently, neural networks are used to analyze social media users' data, including their preferences, opinions, and choices. Najafabadi et al. (2015) point out that deep learning is especially efficient when neural networks need to detect non-local and global relationships and patterns in data. In the context of social media, deep learning can be used to analyze users' preferences and suggest suitable advertisements to them. For example, Iyyer, Enns, Boyd-Graber, and Resnik (2014) used neural networks in their experiments to identify the ideological preferences of users, combining "bag-of-words" features associated with specific ideological movements with markers of subjective language. While it is fascinating to consider how neural networks help organize data, such analysis of political preferences could later be used by politicians to target their potential electorate and encourage more users to vote for them. Thus, neural networks could potentially undermine the ethical integrity of presidential campaigns.
Another problem with neural networks is how they are used. As Zhou, Chawla, Jin, and Williams (2014) point out, storing data in centralized clouds that are later used for neural network analysis is problematic because this data, despite being private, serves the owners of the cloud rather than the users. Neural networks analyze private data to meet the interests of the owners (companies and businesses), whereas the role of users remains small and consists only of providing data. User privacy should outweigh the interests of organizations, but so far data storage remains centralized, which allows organizations to use deep learning and neural networks to gather metrics that can enhance their businesses. Interestingly, these metrics can also affect the design of social networks. For example, users' preferences among suggested products on websites such as Amazon or eBay are analyzed with the help of neural networks, and, depending on those preferences (how many items should be shown, which items were purchased more often than others, etc.), the company customizes the website to increase purchases (Zhou et al., 2014). The problem here is not the customization itself but the fact that users' data (purchases, preferences, likes, etc.) are collected without their informed consent and are used to benefit the company, not the users.
Strategies to Face Challenges
One strategy that could be used to face these challenges is the promotion of users' privacy rights. Not all users are aware that companies must obtain their consent before conducting a neural network analysis that includes their data. Because neural networks often retrieve data from cloud storage, Zhou et al. (2014) suggest replacing centralized cloud storage with clouds that users themselves can control (such as ownCloud).
It would be incorrect to assume that neural networks as such should be prohibited from data analysis in order to preserve privacy. Juma (2016) points out that people can view technologies as forms of destructive creation with no benefit to society. However, despite the identified disadvantages of neural networks and the challenges they create, they should not be perceived as technologies that benefit third parties only. Shokri and Shmatikov (2015) suggest several strategies that can help preserve privacy during deep learning performed by neural networks. First, they propose that participants train the model locally and privately on their own data and share only selected model parameter updates, so that raw data never leave the participant's device and cannot leak while the neural network is being trained.
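The local-training idea above can be illustrated with a minimal sketch. This is not Shokri and Shmatikov's (2015) actual implementation, only a simplified, hypothetical illustration of the principle: a participant computes a gradient on private data locally and shares only a small fraction of its components, keeping the rest (and all raw data) on the device.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(weights, X, y, lr=0.1):
    """One logistic-regression gradient step on a participant's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad, grad

def select_updates(grad, fraction=0.1):
    """Keep only the largest-magnitude `fraction` of gradient components;
    everything else stays local and is never shared."""
    k = max(1, int(fraction * grad.size))
    mask = np.zeros_like(grad)
    top = np.argsort(np.abs(grad))[-k:]
    mask[top] = 1.0
    return grad * mask

# A participant trains on private (here: synthetic) data and shares
# only a sparse subset of the update, never the data itself.
X = rng.normal(size=(32, 10))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(10)
w, grad = local_sgd_step(w, X, y)
shared = select_updates(grad, fraction=0.2)
print(np.count_nonzero(shared))  # only 2 of 10 components leave the device
```

The design choice is the trade-off the paper highlights: sharing fewer parameter components reveals less about the private training data, at the cost of slower collaborative learning.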
Second, differential privacy can be taken into consideration as well. It is a method that guarantees the overall output of an analysis changes very little whether any individual's data (e.g., a user's age, liked posts, or any other data that could be considered private) is included or excluded. Additionally, the sensitivity parameters can be customized depending on privacy concerns and other factors. With this customized sensitivity, neural networks can ignore or exclude data that is considered confidential (e.g., users' country of origin, place of employment, etc.). Many social media sites ask users to link their phone numbers to their profiles, which raises privacy concerns, especially when an analysis performed by a neural network can utilize even highly sensitive data such as credit card numbers or the contents of private messages. To avoid this, the customization of sensitivity is necessary.
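As a rough illustration of how differential privacy decouples an analysis from any single user's record, here is a minimal sketch of the classic Laplace mechanism applied to a counting query. The records and the `private_count` helper are hypothetical examples, not part of any cited system; the key point is that the noise scale is set from the query's sensitivity (a count changes by at most 1 when one record is added or removed).

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=1.0):
    """Counting query under the Laplace mechanism.
    Sensitivity of a count is 1: one user's inclusion or exclusion
    changes the true count by at most 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: each user's country, treated here as sensitive.
users = ["US", "DE", "US", "FR", "US"]
with_user = private_count(users, lambda c: c == "US", epsilon=0.5)
without_user = private_count(users[:-1], lambda c: c == "US", epsilon=0.5)
# The two noisy answers are statistically close, so an observer cannot
# reliably tell whether the last user's record was included.
```

A smaller epsilon means more noise and stronger privacy, which mirrors the essay's point about tuning sensitivity to the confidentiality of the data involved.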
As can be seen, the regulation of neural networks in social media is not yet sufficient to guarantee the privacy of users' data. Companies and businesses use neural networks to collect and analyze data for their own benefit, while users raise concerns about their privacy and the protection of sensitive data. Neural networks are used to examine users' political and ideological preferences, which can undermine their political decisions if social media start targeting their profiles with ideology-based advertisements. Furthermore, neural networks use data stored in centralized clouds, which owners (companies) exploit according to their interests and which are not always regulated. To prevent data leaks and ensure that privacy in social media is not violated during deep learning by neural networks, legal regulation is necessary. Differential privacy, a method that allows omitting individual data points without any adverse influence on the overall outcome, is also a suitable way of overcoming the challenge. Customizing sensitivity during deep learning is another option, as it can help neural networks exclude certain types of data (e.g., political preferences, country of origin, phone numbers, etc.). Additionally, centralized cloud storage can be replaced by private clouds, where users will be able to store data they have concerns about or regard as too sensitive.
Iyyer, M., Enns, P., Boyd-Graber, J., & Resnik, P. (2014). Political ideology detection using recursive neural networks. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (pp. 1113-1122). Stroudsburg, PA: ACL.
Juma, C. (2016). Innovation and its enemies: Why people resist new technologies. Oxford, England: Oxford University Press.
Najafabadi, M. M., Villanustre, F., Khoshgoftaar, T. M., Seliya, N., Wald, R., & Muharemagic, E. (2015). Deep learning applications and challenges in big data analytics. Journal of Big Data, 2(1), 1-21.
Shokri, R., & Shmatikov, V. (2015). Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security (pp. 1310-1321). New York, NY: ACM.
Zhou, Z. H., Chawla, N. V., Jin, Y., & Williams, G. J. (2014). Big data opportunities and challenges: Discussions from data analytics perspectives [discussion forum]. IEEE Computational Intelligence Magazine, 9(4), 62-74.