Prosecution Insights
Last updated: April 19, 2026
Application No. 19/069,937

Dynamic Sorting and Inference Using Gesture Based Machine Learning

Non-Final OA (§103, §DP)
Filed
Mar 04, 2025
Examiner
WU, TONY
Art Unit
2166
Tech Center
2100 — Computer Architecture & Software
Assignee
Match Group Americas LLC
OA Round
1 (Non-Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 9m
Grant Probability With Interview: 79%

Examiner Intelligence

Career Allow Rate: 52% (108 granted / 209 resolved; -3.3% vs TC avg)
Interview Lift: strong, +27.2% among resolved cases with interview
Typical Timeline: 3y 9m average prosecution; 20 applications currently pending
Career History: 229 total applications across all art units

Statute-Specific Performance

§101: 13.1% allow rate (-26.9% vs TC avg)
§103: 68.6% allow rate (+28.6% vs TC avg)
§102: 7.9% allow rate (-32.1% vs TC avg)
§112: 6.1% allow rate (-33.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 209 resolved cases.
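As a sanity check, the per-statute figures are internally consistent: subtracting each reported "vs TC avg" delta from the corresponding allow rate recovers the same Tech Center baseline for every statute. A quick illustration using only the figures above (illustrative arithmetic, not part of the report):

```python
# Recover the implied Tech Center average from each statute's allow rate
# and its reported "vs TC avg" delta (figures taken from the table above).
examiner_rate = {"101": 13.1, "103": 68.6, "102": 7.9, "112": 6.1}
delta_vs_tc = {"101": -26.9, "103": 28.6, "102": -32.1, "112": -33.9}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

All four statutes back out to the same ~40% estimate, which suggests the dashboard applies a single Tech Center baseline rather than per-statute averages.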

Office Action

§103 §DP
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21, 28, 35 are rejected on the ground of nonstatutory double patenting as being unpatentable over independent claims 1 and 9 of Patent 12346534 (reference application). Claims 21, 28, 35 are rejected on the ground of nonstatutory double patenting as being unpatentable over independent claims 16, 24, 29 of Patent 9547369 (reference application). Claims 21, 28, 35 are rejected on the ground of nonstatutory double patenting as being unpatentable over independent claims 1 and 14 of Patent 9720570 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the comparison below.
Claim 21 of instant application:
A method of interpreting user gestures on a user interface of a computing device, comprising: displaying a stack of images sequentially with a first image showing a first object belonging to a first category; while displaying the first image, detecting a user gesture either swiping the first image in a first direction to select the first object or swiping the first image in a second direction to reject the first object; and using the user gesture as a bias to dynamically change a set of subsequent images in the stack of images, comprising: in response to determining that the swiping is in the first direction, changing the set of subsequent images to a first set of images that show objects belonging to the first category; and in response to determining that the swiping is in the second direction, changing the set of subsequent images to a second set of images that show objects belonging to a second category different from the first category.

Claim 1 of patent 12346534:
A method for performing a backup, the method comprising: providing a user interface for display on a screen of a mobile computing device that presents a top image of a first user in a series of images indicating an option to select the first user, wherein the series of images is presented as a stack of images receiving, from a user, input indicating a user engagement with the top image and a swiping of the top image in at least one of a first direction and a second direction, wherein swiping the top image in the first direction indicates selecting the first user and swiping the top image in the second direction indicates rejecting the first user, wherein the first and second directions are towards different sides of the screen; determining a first attribute associated with a first profile of the first user, the first attribute indicating an activity option preferred by the first user, the activity option is determined based at least on the user engagement with the top image and the swiping of the top image; determining that the swiping occurred in the first direction; in response to determining that the swiping occurred in the first direction, selecting the first user; in response to selecting the first user, placing the top image of the first user in a first collection of images; displaying a subsequent image of a second user in the series of images; receiving, from the user, a second input indicating a user engagement with the subsequent image and a swiping of the subsequent image in at least one of the first direction and the second direction; determining that the swiping occurred in the second direction; in response to determining that the swiping occurred in the second direction, rejecting the second user; in response to rejecting the second user, placing the subsequent image of the second user in a second collection of images, the second collection of images comprising images of users that were rejected; determining that the second attribute corresponds to the first attribute; in response to determining that the second attribute corresponds to the first attribute, determining an image of the new user to be displayed on the screen based in part on selecting the first user, wherein selecting the first user represents a preference of the user.

Claim 21 of instant application:
A method of interpreting user gestures on a user interface of a computing device, comprising: displaying a stack of images sequentially with a first image showing a first object belonging to a first category; while displaying the first image, detecting a user gesture either swiping the first image in a first direction to select the first object or swiping the first image in a second direction to reject the first object; and using the user gesture as a bias to dynamically change a set of subsequent images in the stack of images, comprising: in response to determining that the swiping is in the first direction, changing the set of subsequent images to a first set of images that show objects belonging to the first category; and in response to determining that the swiping is in the second direction, changing the set of subsequent images to a second set of images that show objects belonging to a second category different from the first category.

Claim 16 of patent 9547369:
A method for performing a backup, the method comprising: providing a user interface for display on a screen that sequentially, not simultaneously presents a series of images representing options to accept or reject; wherein the user interface implements acceptance or rejection of an option with swiping a respective image in a first or second direction and off the screen, wherein the first and second directions are towards different sides of the screen, and wherein one of the first and second directions includes a swipe right; receiving input representing a user gesture of engagement with the respective image in the user interface and a swipe of the respective image in the first or second direction and off the screen; and based on the first or second direction in the received input, accepting or rejecting the option represented by the respective image.

Claim 21 of instant application:
A method of interpreting user gestures on a user interface of a computing device, comprising: displaying a stack of images sequentially with a first image showing a first object belonging to a first category; while displaying the first image, detecting a user gesture either swiping the first image in a first direction to select the first object or swiping the first image in a second direction to reject the first object; and using the user gesture as a bias to dynamically change a set of subsequent images in the stack of images, comprising: in response to determining that the swiping is in the first direction, changing the set of subsequent images to a first set of images that show objects belonging to the first category; and in response to determining that the swiping is in the second direction, changing the set of subsequent images to a second set of images that show objects belonging to a second category different from the first category.

Claim 1 of patent 9720570:
A method of interpreting user gestures received from a gesture based user interface of a computing device, including: providing a user interface for display on a screen that presents a stack of images indicating options to accept or reject, wherein the user interface implements acceptance or rejection of an option with an indication of a swiping a respective image in a first or second direction and off the screen, wherein the first and second directions are towards different sides of the screen; repeatedly receiving input indicating a user engagement with the respective image in the user interface and a swipe of the respective image in the first or second direction and off the screen and based on the first or second direction in the received input, accepting or rejecting the option indicated by the respective image; and using the options repeatedly accepted and rejected, determining options to represent in subsequent images.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-23, 25-26, 28-30, 32-33, 35-37, 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Nakagawa (U.S Pub # 20110157047) in view of Bonilla (U.S Pub # 20130173368) and in further view of Simkhai (U.S Pat # 8606297).
With regards to claim 21, Nakagawa discloses a method of interpreting user gestures on a user interface of a computing device, comprising: displaying a stack of images sequentially with a first image showing a first object belonging to a first category (Fig. 2A-2C [0039] images are arranged in order of image shooting date. In the vertical direction, images taken on the same date are arranged in such an order that images having a later image shooting time are arranged towards the back of the screen virtual space compared to those with an earlier image shooting time); while displaying the first image, detecting a user gesture either swiping the first image in a first direction to select the first object or swiping the first image in a second direction to reject the first object ([0097] an image can be deleted by flicking the image upwards. The image may be protected by flicking the image downwards).

Nakagawa does not disclose, however Bonilla discloses: using the user gesture as a bias to dynamically change a set of subsequent images in the stack of images, comprising (Bonilla [0062] interest level slider bar is used by the end user in order to make selections or designations about potential dating candidates. [0043] Indicated interest can be used in additional processing for the future coordination of potential matches. Hence, a reevaluation protocol is facilitated by continuing to leverage results from the interest bar). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa by Bonilla to determine potential matches based on user gestures and feedback. One of ordinary skill in the art would have been motivated to make this modification so that end users may use a slider bar to make selections or designations about potential dating candidates (Bonilla [0008]).
Simkhai discloses: in response to determining that the swiping is in the first direction, changing the set of subsequent images to a first set of images that show objects belonging to the first category ([Col. 3 lines 46-56] updating, in response to user input, the user interface to display only those of the representations associated with ones of the other users having one or more interests consistent with interests of the user); and in response to determining that the swiping is in the second direction, changing the set of subsequent images to a second set of images that show objects belonging to a second category different from the first category ([Col. 3 lines 46-56] updating, in response to user input, the user interface to display only those of the representations associated with ones of the other users having one or more interests consistent with interests of the user). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa and Bonilla by Simkhai to update a series of images based on user input. One of ordinary skill in the art would have been motivated to make this modification in order to update, in response to user input, the user interface to display representations of other users (Simkhai [Col. 4 lines 1-10]). Claims 28 and 35 correspond to claim 21 and are rejected accordingly.

With regards to claim 22, Nakagawa does not disclose, however Simkhai discloses: iteratively using each user gesture on each image of the stack of images as an additional bias to dynamically change a subsequent image in the stack to another image that shows a second object that belongs to a third category to which a plurality of selected objects in preceding images within the stack belong ([Col. 3 lines 46-56] updating, in response to user input, the user interface to display only those of the representations associated with ones of the other users having one or more interests consistent with interests of the user). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa and Bonilla by Simkhai to update a series of images based on user input. One of ordinary skill in the art would have been motivated to make this modification in order to update, in response to user input, the user interface to display representations of other users (Simkhai [Col. 4 lines 1-10]). Claims 29 and 36 correspond to claim 22 and are rejected accordingly.

With regards to claim 23, Nakagawa does not disclose, however Simkhai discloses: in response to determining that the swiping is in the first direction, sorting the set of subsequent images within the stack of images such that images belonging to the first category are displayed before other images ([Col. 3 lines 46-56] updating, in response to user input, the user interface to display only those of the representations associated with ones of the other users having one or more interests consistent with interests of the user). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa and Bonilla by Simkhai to update a series of images based on user input. One of ordinary skill in the art would have been motivated to make this modification in order to update, in response to user input, the user interface to display representations of other users (Simkhai [Col. 4 lines 1-10]). Claims 30 and 37 correspond to claim 23 and are rejected accordingly.
With regards to claim 25, Nakagawa does not disclose, however Simkhai discloses: wherein the first image is selected to be displayed on the user interface based at least in part upon a set of historical choices of a user on a previous stack of images, the set of historical choices comprising prior selections or rejections, the set of historical choices further comprising an inferred preference to objects belonging to the first category ([Col. 3 lines 46-56] updating, in response to user input, the user interface to display only those of the representations associated with ones of the other users having one or more interests consistent with interests of the user). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa and Bonilla by Simkhai to update a series of images based on user input. One of ordinary skill in the art would have been motivated to make this modification in order to update, in response to user input, the user interface to display representations of other users (Simkhai [Col. 4 lines 1-10]). Claims 32 and 39 correspond to claim 25 and are rejected accordingly.

With regards to claim 26, Nakagawa does not disclose, however Bonilla discloses: wherein the first category is associated with a preference of a user with respect to objects ([0062] user interest in potential matches). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa by Bonilla to determine potential matches based on user gestures and feedback. One of ordinary skill in the art would have been motivated to make this modification so that end users may use a slider bar to make selections or designations about potential dating candidates (Bonilla [0008]). Claims 33 and 40 correspond to claim 26 and are rejected accordingly.
Claims 24, 31, 38 are rejected under 35 U.S.C. 103 as being unpatentable over Nakagawa (U.S Pub # 20110157047) in view of Bonilla (U.S Pub # 20130173368) and in further view of Simkhai (U.S Pat # 8606297) and Haveliwala (U.S Pub # 20100293057). With regards to claim 24, Nakagawa does not disclose however Simkhai discloses: displaying a second image of the stack of images, the second image showing a second object belonging to the first category ([Col. 3 lines 46-56] updating, in response to user input, the user interface to display only those of the representations associated with ones of the other users having one or more interests consistent with interests of the user); and while displaying the second image of the stack of images: detecting a second user gesture on the second image, the second user gesture indicating a rejection of the second object ([Col. 8 lines 61-63] user may indicate some subjects to be blocked). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa and Bonilla by Simkhai to update a series of images based on user input. One of ordinary skill in the art would have been motivated to make this modification in order to update in response to user input, the user interface to display representation of other users (Simkhai [Col. 4 lines 1-10]). 
Haveliwala discloses: detecting an inference of feedback on the second image, the feedback comprising time spent interacting with the second image ([0066] user viewed for less than a predefined threshold of time); and in response to detecting the second user gesture on the second image and detecting the inference of the feedback on the second image, dynamically changing a classification of the second object by removing the second object from a class of objects that belong to the first category ([0066] For example, information 209 may be used to select only documents that received significant user activity (in accordance with predefined criteria) for generating the user profile, or information 209 may be used to exclude from the profiling process documents that the user viewed for less than a predefined threshold amount of time). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa, Bonilla and Simkhai by Haveliwala to infer user intentions based on time spent deciding on an image. One of ordinary skill in the art would have been motivated to make this modification in order to determine a user’s interests (Haveliwala [0006]). Claims 31 and 38 correspond to claim 24 and are rejected accordingly. Claims 27 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Nakagawa (U.S Pub # 20110157047) in view of Bonilla (U.S Pub # 20130173368) and in further view of Simkhai (U.S Pat # 8606297) and Hua (U.S Pub # 20110191334). 
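The Haveliwala-based inference applied to claim 24 above (a rejection gesture combined with a viewing time below a predefined threshold triggers reclassification of the object) can be sketched as follows. The threshold value, field names, and data structures are assumptions for illustration, not taken from Haveliwala or the application.

```python
# Hypothetical sketch of the claim 24 inference discussed above: a rejection
# gesture plus a dwell time under a predefined threshold removes the object
# from the category it was classified into.
DWELL_THRESHOLD_S = 2.0  # stand-in for the "predefined threshold of time"

def update_classification(categories, obj_id, category, gesture, dwell_s):
    """Drop obj_id from its category on a quick rejection; otherwise keep it."""
    if gesture == "reject" and dwell_s < DWELL_THRESHOLD_S:
        categories[category].discard(obj_id)
    return categories

cats = {"first_category": {1, 2, 3}}
update_classification(cats, 2, "first_category", "reject", 0.8)   # quick reject
update_classification(cats, 1, "first_category", "reject", 5.0)   # slow reject
print(sorted(cats["first_category"]))  # [1, 3]
```

Note the two conditions are conjunctive, matching the claim language: the slow rejection of object 1 leaves its classification intact, while the quick rejection of object 2 removes it.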
With regards to claim 27, Nakagawa does not disclose, however Hua discloses: before displaying a first subsequent image, determining a first similarity score between the first object and a second object shown in the first subsequent image, the first similarity score being based on at least a first attribute in common between the first object and the second object; before displaying a second subsequent image, determining a second similarity score between the first object and a third object shown in the second subsequent image, the second similarity score being based on at least a second attribute in common between the first object and the third object; determining that the second similarity score is higher than the first similarity score; and in response to determining that the second similarity score is higher than the first similarity score, modifying an order of the set of subsequent images, such that the second object is displayed before the third object ([0031] rank the digital images to be displayed. At this stage, the calculated similarity scores corresponding to the digital images may be used to sort the digital images from most similar to least similar). It would have been obvious for one of ordinary skill in the art before the date the current invention was effectively filed to have modified Nakagawa, Bonilla and Simkhai by Hua to rank images by their similarity scores. One of ordinary skill in the art would have been motivated to make this modification in order to retrieve and rank digital images, and display the retrieved and ranked digital images (Hua [0005]). Claim 34 corresponds to claim 27 and is rejected accordingly.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TONY WU whose telephone number is (571)272-2033. The examiner can normally be reached Monday-Friday (9-5).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sanjiv Shah can be reached at (571) 272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TONY WU/ Primary Examiner, Art Unit 2166

Prosecution Timeline

Mar 04, 2025
Application Filed
Jan 09, 2026
Non-Final Rejection — §103, §DP
Mar 12, 2026
Applicant Interview (Telephonic)
Mar 14, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585545: DYNAMIC ADJUSTMENTS OF BACKUP POLICIES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12566674: SPLITTING IMAGE BACKUPS INTO MULTIPLE BACKUP COPIES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566766: Using Artificial Intelligence for Tagging Key Ingredients to Provide Recipe Recommendations (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561382: AUTOMATICALLY RESTRUCTURING SEARCH CAMPAIGNS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541430: GENERATING FILE-BLOCK CHANGE INFORMATION FOR A BACKUP (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 52%
With Interview: 79% (+27.2%)
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 209 resolved cases by this examiner. Grant probability derived from career allow rate.
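The headline projections can be reproduced from the reported counts. The report does not give the raw with/without-interview split, so the sketch below simply applies the reported lift to the career allow rate (illustrative arithmetic only):

```python
# Reproduce the dashboard's headline figures from the reported counts.
granted, resolved = 108, 209            # from the examiner's career data
career_allow_rate = 100 * granted / resolved
print(round(career_allow_rate, 1))      # 51.7, displayed as 52%

interview_lift = 27.2                   # percentage points, as reported
with_interview = career_allow_rate + interview_lift
print(round(with_interview))            # 79
```

This matches the 52% grant probability and 79% with-interview figure shown above, confirming the lift is reported in additive percentage points rather than as a relative multiplier.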
