Daily Notes - 1

Tech · 2024-07-15

A certain little cutie said: "If you write it, when would I even have time to read it? Acting as if I absolutely must read it. Hmph, then I just won't. Who are you to order me around? I can't even finish my own reading, so I'm definitely not reading your stuff. Besides, if your mom and dad found out, they'd blame me for wasting your time." Fine, fine. I'll just write it here, then. Little cutie can read it if she wants, and skip it if she doesn't~

Adversarial Machine Learning

Adversarial examples are malicious inputs designed to fool machine learning models.

Haha, now I finally understand why all the spam texts and spam emails I receive are written in "Martian script" (deliberately mangled characters), hahahaha.
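As a side note to the definition above, the fast gradient sign method (FGSM) is one standard recipe for crafting such malicious inputs. Below is a minimal numpy sketch against a toy logistic-regression "spam filter"; the weights, features, and step size are all invented for illustration, not taken from any source quoted here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" spam filter: w @ x + b > 0 means "spam".
w = np.array([1.5, -0.7, 2.0])
b = -0.5

def p_spam(x):
    return sigmoid(w @ x + b)

def fgsm(x, eps):
    # Gradient of the true-label loss -log p(spam|x) w.r.t. x is (p - 1) * w.
    grad = (p_spam(x) - 1.0) * w
    # FGSM steps in the sign of that gradient to *increase* the loss,
    # pushing the model's output away from the correct label "spam".
    return x + eps * np.sign(grad)

x = np.array([2.0, 0.5, 1.0])       # a spam email's feature vector (toy)
x_adv = fgsm(x, eps=1.2)            # deliberately large step for the demo
print(f"p(spam | x)     = {p_spam(x):.3f}")      # ~0.98: confidently spam
print(f"p(spam | x_adv) = {p_spam(x_adv):.3f}")  # ~0.29: now looks like ham
```

The "Martian script" trick in real spam plays the same role as the `eps * sign(grad)` perturbation here: a small, deliberate change to the input that flips the filter's decision.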

Notice that adversarial problems cannot simply be solved by learners that account for concept drift: while these learners allow the data-generating process to change over time, they do not allow this change to be a function of the classifier itself. Most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases available for mining today violate this assumption.

The expected utilities of the classifier and the adversary are:

$$U_{\mathcal{C}} = \sum_{(x, y) \in \mathcal{X}\mathcal{Y}} P(x, y)\left[U_{C}(\mathcal{C}(\mathcal{A}(x)), y) - \sum_{X_{i} \in \mathcal{X}_{\mathcal{C}}(x)} V_{i}\right]$$

$$U_{\mathcal{A}} = \sum_{(x, y) \in \mathcal{X}\mathcal{Y}} P(x, y)\left[U_{A}(\mathcal{C}(\mathcal{A}(x)), y) - W(x, \mathcal{A}(x))\right]$$

and the classifier's empirical utility over a test set $\mathcal{T}$ is:

$$U_{\mathcal{C}} = \frac{1}{|\mathcal{T}|} \sum_{(x, y) \in \mathcal{T}}\left[U_{C}(\mathcal{C}(\mathcal{A}(x)), y) - \sum_{X_{i} \in \mathcal{X}_{\mathcal{C}}(x)} V_{i}\right]$$

Given the two players, the actions available to each, and the payoffs from each combination of actions, classical game theory is concerned with finding a combination of strategies such that neither player can gain by unilaterally changing its strategy. This combination is known as a Nash equilibrium. In our case, the actions are classifiers $\mathcal{C}$ and feature change strategies $\mathcal{A}$, and the payoffs are $U_{\mathcal{C}}$ and $U_{\mathcal{A}}$. As the following theorem shows, some realizations of the adversarial classification game always have a Nash equilibrium.
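To make the last formula concrete, here is a small sketch of the classifier's empirical utility over a test set $\mathcal{T}$. The utility matrix, the feature-measuring costs $V_i$, and the toy data are all invented here, not taken from the paper.

```python
# U_C(c, y): classifier's utility for predicting c when the truth is y,
# e.g. punish false positives (flagging ham as spam) much harder.
UTILITY = {("spam", "spam"): 1.0, ("ham", "ham"): 1.0,
           ("spam", "ham"): -10.0, ("ham", "spam"): -1.0}

V = {"has_link": 0.1, "all_caps": 0.05}  # cost V_i of measuring feature X_i

def empirical_utility(test_set, classify, measured_features):
    """(1/|T|) * sum over (x, y) in T of [ U_C(C(A(x)), y) - sum_i V_i ].

    Each test item stores the adversarially modified instance A(x)
    directly, so `classify` plays the role of C(A(x)).
    """
    total = 0.0
    for x_mod, y in test_set:
        cost = sum(V[f] for f in measured_features(x_mod))
        total += UTILITY[(classify(x_mod), y)] - cost
    return total / len(test_set)

test_set = [({"has_link": 1, "all_caps": 1}, "spam"),
            ({"has_link": 0, "all_caps": 0}, "ham")]
classify = lambda x: "spam" if x["has_link"] else "ham"
measured = lambda x: ["has_link"]  # the classifier only pays for features it reads
print(empirical_utility(test_set, classify, measured))  # -> 0.9
```

The $\sum V_i$ term is what makes this more than plain accuracy: a classifier that measures every feature pays for it on every instance, so cheap-but-informative features are rewarded.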

THEOREM. Consider a classification game with a binary cost model for ADVERSARY, i.e., given a pair of instances $x$ and $x'$, ADVERSARY can either change $x$ to $x'$ (incurring a unit cost) or it cannot (the cost is infinite). This game always has a Nash equilibrium, which can be found in time polynomial in the number of instances.
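The polynomial-time construction is in the paper; as a toy companion, here is a brute-force check of the Nash condition ("no player gains by unilaterally deviating") on a made-up 2x2 classifier-vs-adversary game. The strategy names and payoffs are all invented for illustration.

```python
from itertools import product

# Strategies: CLASSIFIER picks a decision rule; ADVERSARY picks whether
# to change x into x' (paying the unit cost) or leave it alone.
C_STRATS = ["flag_x_only", "flag_both"]
A_STRATS = ["keep_x", "change_to_x_prime"]

# PAYOFF[(c, a)] = (U_C, U_A), toy values.
PAYOFF = {
    ("flag_x_only", "keep_x"):            ( 1.0, -1.0),
    ("flag_x_only", "change_to_x_prime"): (-1.0,  0.0),  # evasion works, costs 1
    ("flag_both",   "keep_x"):            ( 1.0, -1.0),
    ("flag_both",   "change_to_x_prime"): ( 0.5, -2.0),  # caught anyway, wasted cost
}

def is_nash(c, a):
    uc, ua = PAYOFF[(c, a)]
    no_c_gain = all(PAYOFF[(c2, a)][0] <= uc for c2 in C_STRATS)
    no_a_gain = all(PAYOFF[(c, a2)][1] <= ua for a2 in A_STRATS)
    return no_c_gain and no_a_gain

for c, a in product(C_STRATS, A_STRATS):
    if is_nash(c, a):
        print("Nash equilibrium:", c, a)  # -> flag_both, keep_x
```

In this toy game the only equilibrium is the classifier covering both instances and the adversary not bothering to pay for the change, which matches the intuition behind the binary cost model: once evasion is guaranteed to fail, its unit cost makes it strictly worse than doing nothing.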
