Prosecution Insights
Last updated: April 19, 2026
Application No. 18/419,241

Interactive Forums for Specific Regions of a Digital Image

Status: Final Rejection (§103)

Filed: Jan 22, 2024
Examiner: HOANG, PHI
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: EBAY INC.
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% (756 granted / 928 resolved; +19.5% vs TC avg, above average)
Interview Lift: +17.0% for resolved cases with an interview
Avg Prosecution: 2y 8m (25 currently pending)
Total Applications: 953, across all art units
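The headline figures in this panel are simple functions of the raw counts. A minimal sketch of the arithmetic (the 756/928 counts and the 17-point lift come from the panel above; the rounding behavior and the 100% cap are my assumptions, not the tool's documented method):

```python
# Reproduce the examiner's headline statistics from the raw career counts.
granted = 756          # career grants (from the panel above)
resolved = 928         # career resolved cases
interview_lift = 17.0  # percentage-point lift reported for cases with an interview

allow_rate = 100.0 * granted / resolved                   # ~81.5%, displayed as 82%
with_interview = min(allow_rate + interview_lift, 100.0)  # capped at 100%

print(f"Career allow rate: {allow_rate:.1f}%")      # prints 81.5%
print(f"With interview:    {with_interview:.1f}%")  # prints 98.5%, displayed as 98%
```

Both displayed figures are consistent with the underlying ~81.5% rate, allowing for the dashboard's own rounding or modeling choices.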

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 53.0% (+13.0% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Deltas are measured against the estimated Tech Center average. Based on career data from 928 resolved cases.
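A quick sanity check on these figures: subtracting each "vs TC avg" delta from the examiner's per-statute figure backs out the implied Tech Center baseline, and all four statutes point to the same 40.0% baseline. A sketch of that derivation (variable names are mine; the relationship `implied_tc_avg = examiner_figure - delta` is an assumption about how the tool computes its deltas):

```python
# Back out the implied Tech Center average for each statute.
rates = {  # statute: (examiner figure %, delta vs TC avg %)
    "§101": (10.5, -29.5),
    "§103": (53.0, +13.0),
    "§102": (11.9, -28.1),
    "§112": (13.4, -26.6),
}
for statute, (examiner_figure, delta) in rates.items():
    implied_tc_avg = examiner_figure - delta
    print(f"{statute}: implied TC avg = {implied_tc_avg:.1f}%")  # 40.0% for every statute
```

The internal consistency suggests all four deltas were computed against a single per-statute baseline rather than measured independently.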

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see pages 9-11, filed 15 January 2026, with respect to the rejection(s) of claim(s) 1 and similar claims in substance under 35 U.S.C. 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Law et al. (US 9,715,701 B2) and Bakhshmand et al. (US 2024/0160194 A1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim(s) 1-5, 7-9, and 15-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Biran et al. (US 2021/0286939 A1) in view of Law et al. (US 9,715,701 B2) and further in view of Bakhshmand et al. (US 2024/0160194 A1).
Regarding claim 1, Biran discloses a method comprising: receiving, by a client device, data associated with a digital image of an item from a server (Paragraphs 0017 and 0019, communication of content and associated annotations and comments between client devices and a content server for sharing), the data indicating a tagged region of the digital image and a plurality of comments associated with the tagged region, the plurality of comments submitted by a plurality of users (Paragraphs 0022-0024, the content server stores annotations for regions in an image having objects that have associated comments); displaying, by the client device, the digital image of the item (Figure 5A, display of image of objects); detecting, by the client device, input selecting the tagged region of the digital image (Paragraph 0030, selection of annotations in the image using a menu); and displaying, in response to detecting the input selecting the tagged region, a user interface that includes an aggregated view of the plurality of comments, wherein displaying the user interface includes overlaying at least part of the user interface on at least part of the digital image outside the tagged region (Figures 5B-5D, display of an interface having associated comments, where the interface is displayed below the annotation displayed in the image).

Biran does not clearly disclose where the item is listed for sale on a listing platform. Law discloses a virtual internet-based yard sale allowing a user to sell items with a photograph of items (Column 2, lines 22-29), where the user can tag items in the photograph with information (Figure 2 and column 3, lines 1-34).
Law’s technique of tagging items in a photograph for sale on a virtual internet-based yard sale with information would have been recognized by one of ordinary skill in the art to be applicable to Biran’s system for allowing multiple users to make and share comments for tagged regions in an image, and the results would have been predictable: the tagging of items in a photograph for sale on a virtual internet-based yard sale with information from multiple users. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.

Biran in view of Law does not clearly disclose wherein the tagged region of the digital image corresponds to a portion of the item that has a defect or blemish. Bakhshmand discloses detecting and displaying defects of an object for input by a user of whether or not it is an anomalous defect or an anomaly (Paragraphs 0009-0012). Bakhshmand’s technique of detecting and displaying images of defects of an object for user input on whether or not it is an anomalous defect or an anomaly would have been recognized by one of ordinary skill in the art to be applicable to the tagging of items in a photograph with information by multiple users of Biran in view of Law, and the results would have been predictable: the tagging of items in a displayed image with defect information by multiple users. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 2, Biran discloses wherein displaying the user interface includes positioning the user interface adjacent to the tagged region (Figure 5B, display of comments below the annotation).
Regarding claim 3, Biran discloses displaying, by the client device, a tag to visually indicate that the digital image includes the tagged region (Figure 5B, element 522, annotation is displayed on the image).

Regarding claim 4, Biran discloses wherein detecting the input selecting the tagged region includes detecting input selecting the tag (Paragraph 0030, selection of an annotation from the menu).

Regarding claim 5, Biran discloses overlaying, by the client device, a border on the digital image, the border extending around the tagged region (Figure 5B, element 522, a box representing the annotation).

Regarding claim 7, Biran discloses detecting, by the client device, input at an input element of the user interface; and updating, by the client device, the user interface to indicate a user interaction with the input element by a user of the client device (Figures 3C-3D, and paragraph 0025, text input to add a comment to a comment chain).

Regarding claim 8, Biran discloses wherein the user interaction includes a comment submitted by a user of the client device, wherein updating the user interface includes aggregating the comment with the plurality of comments to update the aggregated view of the plurality of comments in the user interface (Figures 3C-3D, and paragraph 0025, text input to add a comment to a comment chain).

Regarding claim 9, Biran discloses transmitting, by the client device, a report indicating the user interaction to the server (Paragraph 0024, text can be sent to the content server).
Regarding claim 15, Biran discloses wherein the tagged region is a first tagged region, wherein the user interface is a first user interface, wherein the at least part of the digital image is a first part of the digital image, the method further comprising: detecting, by the client device, input selecting a region of the digital image different than the first tagged region (Figure 5C and paragraph 0030, selections made at a menu displayed with the image and annotation); communicating, by the client device, a report to the server, the report identifying the selected region as a second tagged region of the digital image (Paragraphs 0030-0031, the selection of different annotations results in different annotations being received from the content server, paragraph 0019); and displaying, by the client device, a second user interface for displaying comments associated with the second tagged region, wherein displaying the second user interface includes overlaying at least part of the second user interface on a second part of the digital image outside the first tagged region, the second tagged region, and the first part of the digital image (Figure 5D and paragraphs 0030-0031, display of a second annotation for a different region of the image with associated comments over the image below the second annotation).

Regarding claim 16, Biran discloses wherein the data received from the server includes an indication of a plurality of tagged regions of the digital image, the method further comprising: filtering, by the client device, the plurality of tagged regions to select the tagged region based on the tagged region being tagged by a user of the client device, tagged by an administrator of the digital image, or authorized by the administrator for display to the user of the client device (Paragraph 0030, selection of the annotation where annotations can be created by a user, paragraph 0022).

Regarding claims 17 and 20, similar reasoning as discussed in claim 1 is applied.
Furthermore, with regard to claim 12, Biran discloses at least one processor; and a computer-readable storage medium storing instructions that are executable by the at least one processor (Paragraph 0041, computer-readable medium containing computer program code, which can be executed by a computer processor).

Regarding claim 18, Biran discloses wherein the client device is a first client device, wherein the user interface is a first user interface, wherein the tagged region is a first tagged region, the method further comprising: receiving, by the server, a report from a second client device, the report indicating selection of a region of the digital image as a second tagged region of the digital image (Paragraph 0019, content and annotations can be shared between multiple client devices with the content server where users can select regions in the image and provide annotations and comments, paragraph 0022); and causing, by the server, the first client device to display a second user interface for displaying comments associated with the second tagged region by transmitting data associated with the second tagged region to the first client device in response to receiving the report from the second client device (Paragraphs 0030-0031, selection of different regions and displaying the annotation and comments associated with the selected region).
Regarding claim 19, Biran discloses receiving, by the server, at least one report from at least one of a plurality of client devices, the at least one report indicating a plurality of tagged regions in the digital image (Paragraph 0022, users can add annotations to regions in the image); and filtering, by the server, the plurality of tagged regions to select the tagged region for the data transmitted to the client device based on the tagged region being tagged by a user of the client device, tagged by an administrator of the digital image, or authorized by the administrator for display to the user of the client device (Paragraph 0030, selection of the annotation where annotations can be created by a user, paragraph 0022).

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Biran et al. (US 2021/0286939 A1) in view of Law et al. (US 9,715,701 B2) in view of Bakhshmand et al. (US 2024/0160194 A1) and further in view of Rathod (US 2018/0351895 A1).

Regarding claim 6, Biran in view of Law and further in view of Bakhshmand discloses all limitations as discussed in claim 1. Biran in view of Law and further in view of Bakhshmand does not clearly disclose overlaying, by the client device, text associated with the tagged region on the digital image at a location adjacent to the tagged region and outside the at least part of the digital image overlaid by the user interface, wherein the data received from the server includes an indication of the text associated with the tagged region. Rathod discloses overlaying text that can be positioned anywhere on an image that can be shared with other users (Figure 5 and paragraph 0113).
Rathod’s technique of overlaying text that can be positioned anywhere on an image that can be shared with other users would have been recognized by one of ordinary skill in the art to be applicable to the sharing and display of images with annotations and associated comments between clients and a content server of Biran in view of Law and further in view of Bakhshmand, and the results would have been predictable: the overlaying of text positioned anywhere on images with annotations that can be shared between client devices and a content server. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.

Claim(s) 10-12 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Biran et al. (US 2021/0286939 A1) in view of Law et al. (US 9,715,701 B2) in view of Bakhshmand et al. (US 2024/0160194 A1) and further in view of Harrup et al. (US 11,216,933 B2).

Regarding claim 10, Biran in view of Law and further in view of Bakhshmand discloses all limitations as discussed in claim 1. Biran in view of Law and further in view of Bakhshmand does not clearly disclose transmitting, by the client device, a request to authenticate the item of the digital image to the server; and receiving, by the client device, a prediction of whether the item in the digital image is authentic from the server in response to transmitting the request. Harrup discloses authentication of an object by obtaining an image of an item for examination and providing the results of the examination (Figure 5 and column 17, line 56 – column 18, line 40 and column 22, line 53 – column 23, line 4).
Harrup’s technique of authenticating an object by obtaining an image of an item for examination and providing the results of the examination would have been recognized by one of ordinary skill in the art to be applicable to the image of items for sale that can be annotated with associated comments of Biran in view of Law and further in view of Bakhshmand, and the results would have been predictable: the authentication of items for sale in an image that can be annotated with associated comments. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 11, Harrup discloses receiving, by the client device, input requesting authentication of the item in the digital image, wherein transmitting the request is in response to receiving the input requesting the authentication of the item (Column 17, line 56 – column 18, line 40, authentication begins by obtaining the image of the item).

Regarding claim 12, Harrup discloses receiving the input requesting the authentication includes receiving input selecting a region of the digital image, and wherein transmitting the request includes transmitting at least one of the digital image, an indication of the selected region of the digital image, or an indication of the item (Column 17, line 56 – column 18, line 40, identification of features in the image that can be provided to the server).

Regarding claim 14, Harrup discloses updating the user interface to indicate the prediction (Column 22, line 53 – column 23, line 4, displaying the results of the examination).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Biran et al. (US 2021/0286939 A1) in view of Law et al. (US 9,715,701 B2) in view of Bakhshmand et al. (US 2024/0160194 A1) in view of Harrup et al. (US 11,216,933 B2) and further in view of Shae et al. (US 2021/0397897 A1).
Regarding claim 13, Biran in view of Law in view of Bakhshmand and further in view of Harrup discloses all limitations as discussed in claim 10. Biran in view of Law in view of Bakhshmand and further in view of Harrup does not clearly disclose wherein the prediction is based on a machine learning model trained using images of one or more items similar to the item of the digital image. Shae discloses authenticating a product using a deep learning model (Paragraph 0058) that has been trained on images of the product (Paragraphs 0039-0044). Shae’s technique of authenticating a product using a deep learning model would have been recognized by one of ordinary skill in the art to be applicable to the authentication of an item using images of Biran in view of Law in view of Bakhshmand and further in view of Harrup, and the results would have been predictable: the authentication of an item using a deep learning model with images of the item as input. Therefore, the claimed subject matter would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mehrotra et al. (US 2022/0378362 A1) discloses tagging specific portions of an image.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHI HOANG whose telephone number is (571)270-3417. The examiner can normally be reached Mon-Fri 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JASON CHAN, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHI HOANG/
Primary Examiner, Art Unit 2619

Prosecution Timeline

Jan 22, 2024: Application Filed
Oct 30, 2025: Non-Final Rejection, §103
Jan 09, 2026: Examiner Interview Summary
Jan 09, 2026: Applicant Interview (Telephonic)
Jan 15, 2026: Response Filed
Feb 21, 2026: Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602889: METHOD AND SYSTEM OF RENDERING A 3D IMAGE FOR AUTOMATED FACIAL MORPHING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592010: NEURAL NETWORK-BASED IMAGE LIGHTING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579624: DISPLAY DEVICE AND OPERATING DRIVING THEREOF (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561885: METHOD, SYSTEM, AND MEDIUM FOR ARTIFICIAL INTELLIGENCE-BASED COMPLETION OF A 3D IMAGE DURING ELECTRONIC COMMUNICATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561866: CONTENT-SPECIFIC-PRESET EDITS FOR DIGITAL IMAGES (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 98% (+17.0%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate

Based on 928 resolved cases by this examiner. Grant probability derived from career allow rate.
