Prosecution Insights
Last updated: April 19, 2026
Application No. 18/842,957

DISPLAY ASSISTANCE APPARATUS, DISPLAY ASSISTANCE METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM

Status: Non-Final OA (§103)
Filed: Aug 30, 2024
Examiner: LIU, ZHENGXI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (225 granted / 354 resolved; +1.6% vs TC avg)
Interview Lift: +40.1% (strong; allow rate among resolved cases with an interview vs without)
Avg Prosecution: 3y 4m typical; 31 applications currently pending
Career History: 385 total applications across all art units
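
The arithmetic behind these cards can be reproduced from the raw counts shown above. A minimal sketch follows; note that the TC-average baseline is back-computed from the stated delta and is an assumption, not a published figure.

```python
# Reproduce the examiner stat cards from the raw counts above.
# Assumption: the "+1.6% vs TC avg" delta is in percentage points, so the
# TC 2600 baseline is back-computed; it is not a published USPTO figure.

granted, resolved = 225, 354
pending, total_applications = 31, 385

allow_rate = granted / resolved                  # 0.6356 -> displayed as 64%
implied_tc_average = allow_rate - 0.016          # ~62.0%

print(f"Career allow rate:  {allow_rate:.1%}")   # 63.6%, rounded up to 64%
print(f"Implied TC average: {implied_tc_average:.1%}")
assert resolved + pending <= total_applications  # remainder: other dispositions
```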

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 354 resolved cases.
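
Each delta implies the Tech Center baseline the chart compared against. A quick back-calculation, assuming the deltas are simple percentage-point differences:

```python
# Back-compute the implied TC 2600 baseline per statute from the
# examiner's rate and the stated delta. Assumes simple percentage-point
# differences; the underlying chart is not reproduced here.

stats = {"§101": (13.2, -26.8), "§103": (61.3, +21.3),
         "§102": (5.1, -34.9), "§112": (15.7, -24.3)}

for statute, (rate, delta) in stats.items():
    print(f"{statute}: examiner {rate}% vs implied TC avg {rate - delta:.1f}%")
```

Notably, all four pairs imply the same 40.0% baseline, suggesting the dashboard measures each statute against a single pooled Tech Center estimate rather than per-statute averages.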

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claims 1 and 10-11 are objected to because of the following informalities: the antecedent basis for "detection target(s)" could be further clarified, and there are related clarity issues. Appropriate correction is required. The Examiner recommends the following amendments to Claim 1; Claims 10-11 have similar issues.

[Claim 1] (Currently Amended) A display assistance apparatus comprising: at least one memory store instructions; and at least one processor configured to execute the instructions to: acquire a detection result of an image including a plurality of detection targets, and in which detection processing of the plurality of detection targets is performed; cause to display the acquired detection result of the image; and acquire information indicating an instruction to the acquired detection result, wherein set a predetermined number of [[the]] detection targets of the plurality of detection targets to be displayed, cause to display position information indicating a position of [[the]] a detection target of the plurality of detection targets within the image, and a score indicating certainty of the detection target in association with the image, and, in response to acquiring switching information indicating an instruction to switch the detection target serving as the detection result display target, switch the detection result display target to another of the plurality of detection targets within the image, and [[causes]] cause to display [[the]] position information and [[the]] a score related to the another detection target after the switching.

Note: the suggestion "set a predetermined number of [[the]] detection targets of the plurality of detection targets . . ." is made in light of the specification ¶ 58: "Then, the display processing unit 104 sets a predetermined number (one in the example in Figs. 7A to 7C) of the detection targets, as a detection result display target, and causes to display position information (rectangular frame 210) indicating a position of the detection target within the image 200, and a score (label 220) indicating certainty of the detection target in association with the image 200 (step S103). First, an image 200 in Fig. 7A is displayed on the display apparatus 110." Here, there are more than three detection targets (faces) in the image of Figs. 7A-7C, and the predetermined number is 1, which is fewer than the number of detection targets in the image.

Note: "when" could be replaced by "in response to," because it is unclear whether the limitation related to "when" is a contingent limitation, and contingent limitations carry their own claim-interpretation ambiguities. Applicant is requested to answer the following questions regarding the "when" limitation: If a reference A does not teach "acquiring switching information . . ." at all, does the reference teach "when acquiring switching information indicating . . ., switch"? Does "when acquiring switching information indicating . . ., switch" require a step of "acquiring switching information indicating . . ."?
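To make the recited behavior concrete, the display-and-switch logic of the suggested amendment can be sketched as below. This is a hypothetical illustration only; every name is invented, and the application discloses no source code.

```python
# Hypothetical sketch of suggested amended Claim 1's display/switch logic.
# All names are invented for illustration; the application discloses no code.
from dataclasses import dataclass

@dataclass
class DetectionTarget:
    box: tuple[int, int, int, int]  # position information (x, y, w, h)
    score: float                    # certainty of the detection

class DisplayAssistant:
    def __init__(self, targets, predetermined_number=1):
        self.targets = targets
        self.n = predetermined_number  # per spec ¶ 58, e.g. one of several faces
        self.index = 0                 # current detection result display target

    def display_targets(self):
        # Cause to display position information and score for the
        # predetermined number of targets, in association with the image.
        return self.targets[self.index : self.index + self.n]

    def on_switching_information(self):
        # In response to acquiring switching information, switch the display
        # target to another of the plurality of detection targets.
        self.index = (self.index + 1) % len(self.targets)
        return self.display_targets()

faces = [DetectionTarget((10, 20, 30, 30), 0.999),
         DetectionTarget((60, 22, 30, 30), 0.988)]
ui = DisplayAssistant(faces)                        # predetermined number = 1
assert ui.display_targets() == [faces[0]]           # first target shown
assert ui.on_switching_information() == [faces[1]]  # switched to the other
```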
Compact Prosecution

With respect to Claim Interpretation, the Examiner has provided notes marked "[BRI on the record]" throughout the Office Action so that the record is clear about the scope of the claimed invention and about the basis for the Examiner's analyses. A clear record of claim interpretation can expedite examination by allowing it to focus on Applicant's inventive concept and its comparison with the related prior art. If there are disagreements, Applicant may present an alternative interpretation based on MPEP 2111. The Examiner will adopt Applicant's interpretation on the record if it is reasonable and/or the arguments are persuasive. Applicant may also amend the claims relying on the Examiner's claim interpretation provided on the record.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Fukagai et al. (US 20200074690 A1) in view of Choi et al. (US 20090037477 A1) and Yumiki (US 20080260375 A1).

Regarding Claim 1, Fukagai teaches A display assistance apparatus ("An image recognition apparatus includes a processor including a plurality of arithmetic units; and a memory storing a plurality of data elements, each corresponding to one of candidate regions detected in an image and indicating a location and an evaluation value of the corresponding candidate region." Fukagai Abstract. The disclosed image processing provides assistance to displaying, as in Fukagai Fig. 5 [image reproduced in the OA].) comprising: at least one memory store instructions; and at least one processor configured to execute the instructions to ("An image recognition apparatus includes a processor including a plurality of arithmetic units; and a memory storing a plurality of data elements, . . . ." Fukagai Abstract. "The CPU 101 is a processor configured to execute program instructions. The CPU 101 reads out at least part of programs and data stored in the HDD 103, loads them into the RAM 102, and executes the loaded programs." Fukagai ¶ 69.): acquire a detection result of an image including a plurality of detection targets, and in which detection processing of the detection target is performed ("An image recognition apparatus includes a processor including a plurality of arithmetic units; and a memory storing a plurality of data elements, each corresponding to one of candidate regions detected in an image and indicating a location and an evaluation value of the corresponding candidate region." Fukagai Abstract. The detection targets are mapped to the disclosed detected candidate regions, including the horse, person, dog, and car shown in Fig. 5. The detection processing includes object recognition/classification.); cause to display the acquired detection result of the image ([Fukagai Fig. 5, image reproduced in the OA] "An image 42 of FIG. 5 is a displayed image with the recognition results 38 output from the fast R-CNN layer 37, superimposed over the input image. . . . In the example of FIG. 5, the car, dog, horse, and two people are detected correctly. The calculated score for the car is 1.000; for the dog, 0.958; for the horse, 0.999; for a person on the near side in the image 42, 0.999; and a person on the far side in the image 42, 0.988." Fukagai ¶ 93.); and ([Fukagai Fig. 5, image reproduced in the OA] The displayed position information is mapped to the bounding box in Fig. 5, which indicates the position of the detected/recognized object. The number above each bounding box is a confidence score, mapped to the score indicating certainty. ". . . , and calculates a score for each object candidate region based on the feature map 34. The score may also be referred to as an evaluation value or credibility measure. The score indicates the probability of a desired object being present in the corresponding object candidate region. The higher the score, the higher the probability of the desired object being present." Fukagai ¶ 88.)

Fukagai does not explicitly disclose acquire information indicating an instruction to the detection result, wherein set a predetermined number of the detection targets, as a detection result display target, and causes to display information including the position information indicating the position of the detection target, and, when acquiring switching information indicating an instruction to switch the detection target serving as the detection result display target, switch the detection result display target to another of the detection target within the image, and causes to display the position information and the score related to the detection target after the switching.

Choi teaches acquire information indicating an instruction to the detection result, wherein set a predetermined number of the detection targets, as a detection result display target, and causes to display information including the position information indicating the position of the detection target ([BRI on the record] The Examiner requests clarification regarding the time reference for the term "predetermined." The determination of the "predetermined number" could be before the displaying, before the acquiring, or before any operation after the determination of the "predetermined number." [Mapping Analysis] [Choi Figs. 8A-8B, images reproduced in the OA] "Referring to FIG. 8A, the tag information setting screen display may display the facial photos of all individuals recognized in the original photo. As shown in FIG. 8B, the tag information setting screen display may display the facial photos of individuals only selected by the user on the original photo. Here, the tag information setting screen display may include facial photos of new individuals of which tag information is not preset, thus to allow the user to set tag information for the new individuals of which the tag information is not preset." Choi ¶ 141. The instruction is mapped to the user's selection instruction to "display the facial photos of individuals." If the user selects two individuals to be displayed, the user sets a predetermined number (here, 2) of detected targets/individuals. Figs. 8A-8B show that the selected detection targets are displayed with bounding boxes indicating each detected object's position. After Fukagai and Choi are combined, the features of the bounding boxes, including the labeling and confidence score information, are incorporated.), and, when acquiring switching information indicating an instruction to switch the detection target serving as the detection result display target, switch the detection result display target to another of the detection target within the image, and causes to display the position information and the score related to the detection target after the switching ("As shown in FIG. 8B, the tag information setting screen display may display the facial photos of individuals only selected by the user on the original photo. . . ." Choi ¶ 141, quoted above. Fig. 8B shows that only the selected bounding boxes are displayed, compared to Fig. 8A. When a user selects a different number/set of individuals, the selection is associated with the switching information. For example, the user may initially select person/object A and display according to that selection. Subsequently, the same user may unselect person/object A and select and display person/object B. When this happens, there is a switching of the detection target. The detection target is mapped to a recognized object with a bounding box showing that it is a target. Fukagai already teaches displaying position information (bounding box) and score (confidence score), as shown in Fukagai Fig. 5.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi's user input to select items of interest with primary reference Fukagai. One of ordinary skill in the art would be motivated to allow a user to customize the output of a software application. Therefore, the software application would be more user-friendly.

With respect to "predetermined number," the Examiner again notes and requests clarification regarding the time reference for the term "predetermined." The determination of the "predetermined number" could be before the displaying, before the acquiring, or before any operation after the determination of the "predetermined number." However, if the "predetermined number" is determined before the claimed "acquire information indicating an instruction to the detection result . . .," Fukagai in view of Choi does not teach the claimed "predetermined number." For purposes of compact prosecution, the Examiner introduces a new reference to address the issue.
Yumiki teaches acquire information indicating an instruction to the detection result, wherein set a predetermined number of the detection targets, as a detection result display target ([Yumiki Fig. 15, image reproduced in the OA] "In the example shown in FIG. 15, photographed image 88a001, photographed by adjusting focus and exposure upon a specific photographing object (S=001), is displayed. Next, to select another specific photographing object (S=002), the photographer presses the right arrow key following operation cross key 38 displayed in the right corner of display section 55, and thereupon the rectangular marking on the photographing object (S=001) moves right over the photographing object (S=002)." Yumiki ¶ 134. Here, the predetermined number is 1. Significantly, this mapping is consistent with the specification's examples related to "predetermined number." Spec. ¶ 58. After the combination of Yumiki with Fukagai in view of Choi, a user could step through objects of interest one by one.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yumiki's technique of moving focus with Fukagai in view of Choi. One of ordinary skill in the art would be motivated to simplify the interface and help a user focus on an item of interest.

Further, the direction-based interface in Yumiki Fig. 15 could be implemented as a software interface as well. Choi states, "When the portable terminal 100 is in the video call mode or the image capturing mode, the display module 151 may display a captured and/or received image, a UI, a GUI, and the like on its screen display." Choi ¶ 60. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi's GUI with Fukagai in view of Choi and Yumiki. One of ordinary skill in the art would be motivated to enhance the flexibility of a computing device and/or to potentially lower the cost of the device. After the combination of these references, Yumiki's direction-based interface could be either hardware or software based.

Regarding Claim 2, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 1, wherein the position information is a rectangle (boxes in Fukagai Fig. 5; Choi Figs. 8A-8B) surrounding the detection target (identified object(s), person(s), and/or face(s) in Fukagai Fig. 5; Choi Figs. 8A-8B) in the image, and the at least one processor (Fukagai ¶ 69; Figs. 2-3) is further configured to execute the instructions to cause the display processing unit causes to display the score outside of the rectangle (Fukagai Fig. 5 [detail image showing the score above a bounding box]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi's user input to select items of interest with primary reference Fukagai. One of ordinary skill in the art would be motivated to allow a user to customize the output of a software application. Therefore, the software application would be more user-friendly.

Regarding Claim 3, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 1, wherein the at least one processor (Fukagai ¶ 69; Figs. 2-3) is further configured to execute the instructions to, when acquiring area specification information indicating an instruction to specify an area being a part within the image (boxes in Fukagai Fig. 5; Choi Figs. 8A-8B, which specify the area for a detected object; the area specification information corresponds to the box-drawing instructions for the boxes in the images), and including a plurality of detection targets (detected objects in Fukagai Fig. 5; Choi Figs. 8A-8B), crop the specified area from the image and causes to display the area ([Choi image reproduced in the OA, showing that the areas of Mr. Park and Ms. Lee were cropped and displayed]); and cause to display the position information (box) and the score (confidence score) regarding the predetermined number (see Claim 1 regarding the mapping for "predetermined number") of the detection targets included in the area (Fukagai Fig. 5 [detail image]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi's user input to select items of interest and crop the selected items with primary reference Fukagai. One of ordinary skill in the art would be motivated to (a) allow a user to customize the output of a software application, and (b) allow the user to view the selected images better by enlarging them and/or use the image of an identified object for other integrated purposes/functions. Therefore, the software application would be more user-friendly.

Regarding Claim 4, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 1, wherein the at least one processor (Fukagai ¶ 69; Figs. 2-3) is further configured to execute the instructions to acquire, as switching information on the detection target, an input from an operator (Choi ¶ 141 and Yumiki ¶ 134, both quoted above: the user's selection of individuals to display in Choi Figs. 8A-8B and the photographer's right-arrow key press in Yumiki Fig. 15 are operator inputs). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi's user input to select items of interest with primary reference Fukagai; one of ordinary skill in the art would be motivated to allow a user to customize the output of a software application, making it more user-friendly. It likewise would have been obvious to combine Yumiki's technique of moving focus in an image with Fukagai in view of Choi; one of ordinary skill in the art would be motivated to simplify the interface and help a user focus on an item of interest.

Regarding Claim 5, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 1. Fukagai in view of Choi does not explicitly disclose wherein the switching information includes direction information indicating a switching direction of the detection target serving as the detection result display target, and the at least one processor is further configured to execute the instructions to cause to display the position information and the score by setting the detection target located in the direction indicated by the input direction information, as the next detection result display target from among the detection target being the current detection result display target. Yumiki teaches wherein the switching information includes direction information indicating a switching direction of the detection target serving as the detection result display target ([Yumiki Fig. 15, image reproduced in the OA] Yumiki ¶ 134, quoted above. The direction information is mapped to the direction specified by the arrow interface item. The process helps a user select and/or highlight a target item.), and the at least one processor (Fukagai ¶ 69; Figs. 2-3) is further configured to execute the instructions to cause to display the position information (boxes in Fukagai Fig. 5) and the score (confidence scores in Fukagai Fig. 5) by setting the detection target located in the direction indicated by the input direction information (Yumiki Fig. 15 [detail image] showing a moving box that specifies the next target in the direction of the arrow, as explained in Yumiki ¶ 134), as the next detection result display target from among the detection target being the current detection result display target (Yumiki Fig. 15; Yumiki ¶ 134, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yumiki's technique of moving focus in an image with Fukagai in view of Choi. One of ordinary skill in the art would be motivated to simplify the interface and help a user focus on an item of interest.
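The direction-based switching mapped for Claim 5 can be illustrated with a small sketch: from the current target, an arrow-key press selects the nearest detection target in the indicated direction, as in Yumiki's Fig. 15 walkthrough. The helper below is hypothetical; neither Yumiki nor the application discloses source code.

```python
# Hypothetical sketch of Yumiki-style direction-based switching (Claim 5):
# a right-arrow press moves the selection to the nearest detection target
# to the right of the current one. All names are invented for illustration.

def next_target_in_direction(boxes, current, direction):
    """boxes: list of (x, y, w, h); current: index; direction: 'left'/'right'."""
    cx = boxes[current][0] + boxes[current][2] / 2
    candidates = []
    for i, (x, y, w, h) in enumerate(boxes):
        if i == current:
            continue
        center = x + w / 2
        if (direction == "right" and center > cx) or \
           (direction == "left" and center < cx):
            candidates.append((abs(center - cx), i))
    return min(candidates)[1] if candidates else current  # stay put at the edge

# Three faces left to right; pressing "right" from target 0 selects target 1.
boxes = [(10, 40, 30, 30), (60, 42, 30, 30), (120, 38, 30, 30)]
assert next_target_in_direction(boxes, 0, "right") == 1
assert next_target_in_direction(boxes, 2, "right") == 2  # nothing further right
```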
Regarding Claim 6, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 1, wherein the at least one processor (Fukagai ¶ 69; Figs. 2-3) is further configured to execute the instructions to cause to display a detection result of a plurality of the detection targets in a list ("FIGS. 9A and 9B are exemplary views showing another method for setting tag information about facial photos of recognized individuals in the portable terminal according to the present invention." Choi ¶ 142. "Referring to FIG. 9A, the user may select certain individuals on the tag information setting screen display so as to change the tag information for the selected individuals or to input new tag information." Choi ¶ 143. [Choi Figs. 9A-9B, images reproduced in the OA], where the left screen shows the list of detected objects.); acquire selection information indicating the detection target selected from the list display (Choi Figs. 9A-9B, showing checkboxes to acquire selection information); and cause to display a search result of the detection target indicated by the selection information in association with the image ("Referring to FIG. 9A, the user may select certain individuals on the tag information setting screen display so as to change the tag information for the selected individuals or to input new tag information. The tag information may include name information, group information, contact information (e.g., a telephone number, an e-mail address, a mailing address, home-page address, and the like) as well as event information (e.g., a birthday, a graduation ceremony, an anniversary, and the like)." Choi ¶ 143. The search result may be mapped to the tag information found to be changed, wherein the tag information corresponds to the selected known individual.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi's technique of acquiring and updating tag information with Fukagai in view of Choi and Yumiki. One of ordinary skill in the art would be motivated to update tag information associated with an identified object, e.g., an individual. Therefore, the information would be current and accurate.

Regarding Claim 7, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 6, wherein the at least one processor (Fukagai ¶ 69; Fig. 3) is further configured to execute the instructions to cause to display the detection result in a batch manner for each attribute of the detection result ([BRI on the record] With respect to "batch manner," the Examiner is reading the limitation to mean: in set(s) or group(s). [Mapping Analysis] Fukagai Fig. 5 [image reproduced in the OA], where the attributes of confidence scores and identities of identified objects are displayed in batch, together as groups or sets.).

Regarding Claim 8, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 1, wherein the score is a score which is generated by a learning model (Fukagai Fig. 4. "Using the feature map 34, the RPN layer 35 detects, from the input image 32, the object candidate regions 36 which are image regions likely to contain objects to be detected. The RPN layer 35 sets, as object candidate regions, a large number of rectangular regions of different sizes at different locations on the input image 32, and calculates a score for each object candidate region based on the feature map 34. The score may also be referred to as an evaluation value or credibility measure. The score indicates the probability of a desired object being present in the corresponding object candidate region. The higher the score, the higher the probability of the desired object being present. The RPN layer 35 first extracts more than 6000 object candidate regions and selects the top 6000 object candidate regions in descending order of the scores. Then, the RPN layer 35 removes object candidate regions with high degree of overlapping to finally output 300 object candidate regions." Fukagai ¶ 88. "The fast R-CNN layer 37 determines, based on the feature map 34 and the object candidate regions 36, the class of an object captured in each object candidate region and calculates a score indicating the credibility measure of the determination result. The fast R-CNN layer 37 selects a small number of image regions with sufficiently high scores. The recognition results 38 output from the fast R-CNN layer 37 include, for each selected image region, the location of the image region, the determined object class, and the score indicating the probability that the object is the particular class." Fukagai ¶ 89.).
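The Fukagai passage quoted for Claim 8 describes standard Faster R-CNN region-proposal post-processing: score the candidate regions, keep the top 6000 by score, then remove heavily overlapping regions to output 300. A minimal sketch of that step, assuming ordinary IoU-based non-maximum suppression (the quoted text does not specify Fukagai's exact overlap measure):

```python
# Minimal sketch of the post-processing Fukagai ¶ 88 describes: keep the
# top-k candidate regions by score, then greedily drop regions that overlap
# an already-kept, higher-scoring region too much. Assumes standard
# IoU-based non-maximum suppression; the exact overlap measure is not quoted.

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def propose(regions, top_k=6000, keep=300, iou_threshold=0.7):
    """regions: list of (score, box). Returns up to `keep` proposals."""
    ranked = sorted(regions, key=lambda r: r[0], reverse=True)[:top_k]
    kept = []
    for score, box in ranked:
        if all(iou(box, kept_box) < iou_threshold for _, kept_box in kept):
            kept.append((score, box))
            if len(kept) == keep:
                break
    return kept
```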
Regarding Claim 9, Fukagai in view of Choi and Yumiki teaches The display assistance apparatus according to claim 1, wherein the at least one processor (Fukagai ¶ 69; Fig. 3) is further configured to execute the instructions to acquire information indicating an instruction to select the detection target ([Choi Figs. 9A-9B, images reproduced in the OA], showing checkboxes to select an identified object, e.g., an individual.); cause a storage unit to store a detection result of the selected detection target (Choi Figs. 9A-9B. Choi ¶ 143, quoted above. "Referring to FIG. 9B, when the tag information is stored, a file name of a facial photo can be changed by using the tag information. For instance, if the facial photo is automatically to be stored, the controller 180 may automatically set a file name using numbers or alphabetical letters according to a stored order. In this embodiment, the file name can be set by including name information of the individual who is to be stored in the tag information. The file name can also be set by using all types of information included in the tag information, such as group information, data information, as well as name information." Choi ¶ 144.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Choi's storing of tag information with Fukagai in view of Choi and Yumiki. One of ordinary skill in the art would be motivated to allow users to reuse the data later. See Choi ¶ 144, quoted above.

Claims (10, 12-15) and (11, 16-19) are substantially similar to Claims 1-5. The rejection analyses based on Fukagai in view of Choi and Yumiki for Claims 1-5 are applied to Claims (10, 12-15) and (11, 16-19). In addition, Claim 10 recites "A display assistance method comprising, by one or more computers: . . ." (Fukagai ¶ 69; Figs. 2-3), and Claim 11 recites "A non-transitory computer-readable medium storing a program for causing a computer to execute: . . ." (Fukagai ¶ 69; Figs. 2-3).

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Rizoiu et al. (US 20090225060 A1) relates to "predetermined number," ¶ [0030]: "Selection of the element may comprise moving a displayed pointing device over the element or highlighting the element, followed by entering one or more confirmation inputs (e.g., a click of a pointing device, a depression of a button or arrow on a housing of the user interface, the tapping once on a return or enter key, a contacting of a user's finger or stylus on a part of a touchscreen, etc.), selecting on the display a number or other item or icon corresponding to the element, selecting a 'next' or 'previous' icon, providing a voice activated command, etc." Ozog (US 10027726 B1) also shows bounding boxes and object recognition [image reproduced in the OA].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU, whose telephone number is (571) 270-7509. The examiner can normally be reached M-F, 9 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZHENGXI LIU/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Aug 30, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602865
METHODS FOR DEPTH CONFLICT MITIGATION IN A THREE-DIMENSIONAL ENVIRONMENT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12599463
COLOR MANAGEMENT PROCESS FOR CUSTOMIZED DENTAL RESTORATIONS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597402
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM FOR APPLICATION WINDOW HAVING FIRST DISPLAY MODE AND SECOND DISPLAY MODE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12567193
PARTICLE RENDERING METHOD AND APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561929
METHOD AND ELECTRONIC DEVICE FOR PROVIDING INFORMATION RELATED TO PLACING OBJECT IN SPACE
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 99% (+40.1%)
Median Time to Grant: 3y 4m
PTA Risk: Low

Based on 354 resolved cases by this examiner. Grant probability is derived from the career allow rate.
