Prosecution Insights
Last updated: April 19, 2026
Application No. 18/545,349

ELECTRONIC DEVICE, METHOD, AND COMPUTER-READABLE STORAGE MEDIA FOR IDENTIFYING VISUAL OBJECT CORRESPONDING TO CODE INFORMATION USING A PLURALITY OF CAMERAS

Non-Final OA (§102, §103)
Filed: Dec 19, 2023
Examiner: ABDOU TCHOUSSOU, BOUBACAR
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 67%, above average (294 granted / 436 resolved; +9.4% vs TC avg)
Interview Lift: +14.2% across resolved cases with interview (moderate lift)
Avg Prosecution: 2y 5m typical timeline (21 applications currently pending)
Career History: 457 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 436 resolved cases.
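The per-statute deltas let you back out the implied Tech Center averages, assuming each delta is simply the examiner's rate minus the TC average. A quick check using the figures shown above:

```python
# Back out the implied TC-average rate per statute from the examiner's
# rate and the reported delta vs the TC average (all in percent).
examiner_rates = {"101": 4.1, "103": 52.1, "102": 24.1, "112": 15.2}
delta_vs_tc = {"101": -35.9, "103": +12.1, "102": -15.9, "112": -24.8}

# implied TC average = examiner rate - delta
implied_tc_avg = {
    statute: round(rate - delta_vs_tc[statute], 1)
    for statute, rate in examiner_rates.items()
}
```

Notably, every statute implies the same TC average of 40.0%, which suggests the deltas were all computed against one blended baseline rather than per-statute averages.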

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 18 is objected to because of the following informalities: "obtain third image frames by magnifying the first image frames obtained using the first camera that is maintained as the camera for the recognition of the QR code is maintained as the first camera, while displaying the preview image" should be "obtain third image frames by magnifying the first image frames obtained using the first camera that is maintained as the camera for the recognition of the QR code, while displaying the preview image". Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 4-7, 10, and 13-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ota (US 20210042485).

As to claim 1, Ota discloses an electronic device (FIGS. 1-2), comprising: a plurality of cameras including a first camera and a second camera facing in a same direction as the first camera (FIG. 1B, cameras 114a, 114b, 114c); a display (FIG. 1A, display 105); a processor (FIG. 2, CPU 101); and memory for storing instructions that, when executed by the processor, cause the electronic device to (see [0018]): display, on the display, a preview image, based on first image frames obtained using the first camera from among the plurality of cameras (FIG. 4A, S401; see [0041], In S402, the CPU 101 displays, on the display 105, an LV image captured by the standard camera 114b among the three rear cameras 114 driven in S401), and based at least in part on determining that a portion of the first image frames includes an object to be recognized (FIG. 4A, S403; see [0043], the CPU 101 determines whether an optical code image, e.g., a subject appearing to be a two-dimensional code, is included in an image captured by the standard camera 114b): while displaying the preview image based on the first image frames obtained using the first camera from among the plurality of cameras, obtain second image frames using the second camera (FIG. 4A, S404-S405; see [0043]-[0044], the CPU 101 determines whether the two-dimensional code is readable by the standard camera image processing unit 104b from the image captured by the standard camera 114b … If the two-dimensional code is unreadable (NO in S404), the processing proceeds to S405 … using the telecamera image processing unit 104a, the CPU 101 determines whether a subject appearing to be a two-dimensional code is included in an image captured by the telecamera 114a), and execute a recognition function based on the second image frames obtained using the second camera (FIG. 4A, S406; see [0045], the CPU 101 determines whether the two-dimensional code is readable by the telecamera image processing unit 104a from the image captured by the telecamera 114a, e.g., determines whether a distribution pattern of the cells 602 of a subject appearing to be a two-dimensional code is detectable).
As to claim 4, Ota further discloses wherein the instructions, when executed by the processor, cause the electronic device to: when obtaining the first image frames including the object using the first camera (FIG. 5A, YES at S502), based on a position of the object maintained in the at least portion of the second image frames (FIG. 5A, YES at S506), execute the recognition function based on the second image frames (FIG. 5A, YES at S507; see [0071]), and when obtaining the first image frames that do not include the object using the first camera (FIG. 5A, NO at S502), at least temporarily cease obtaining the second image frames using the second camera (FIG. 5, S504-S505: the telecamera does not capture images until a predetermined time has elapsed).

As to claim 5, Ota further discloses wherein the first camera corresponds to a wide-angle camera and the second camera corresponds to a telephoto camera having a narrower field of view than the wide-angle camera (see [0016], The rear camera 114 includes a telecamera 114a, a standard camera 114b, and a super-wide angle camera 114c; see [0025], if the standard camera 114b is selected, an image with an angle wider than that of an image captured by the telecamera 114a), wherein the recognition function is executed to recognize a quick response (QR) code (see FIG. 6 and [0042], symbol 601 is a QR code) based on the second image frames obtained using the telephoto camera while the wide-angle camera is used to obtain the first image frames for displaying the preview image (FIGS. 3A-3B), wherein the instructions, when executed by the processor, cause the electronic device to: based on execution of the recognition function of the object corresponding to a QR code, display a visual object in relation to a portion of the preview image corresponding to the QR code (FIGS. 3A-3B).

As to claim 6, Ota further discloses wherein the instructions, when executed by the processor, cause the electronic device to display, with the visual object, an executable object for executing a function corresponding to the object, and wherein the visual object is displayed along a periphery of the object located in the portion of the preview image (FIG. 3D and [0052]).

As to claim 7, Ota further discloses wherein the recognition function is executed to recognize a text based on the second image frames obtained using the second camera while the first camera is used to obtain the first image frames for displaying the preview image (see [0052]), wherein the instructions, when executed by the processor, cause the electronic device to: based on execution of the recognition function of the object corresponding to a text, display a visual object with a size corresponding to the size of the object, in order to mask the object viewed in the preview image (FIG. 3D).

As to claims 10 and 13-15, method claims 10 and 13-15 recite the same features as those recited in claims 1 and 4-6, respectively, and are therefore rejected for the same reasons as above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2-3, 11-12 and 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ota (US 20210042485) in view of Ono (US 20030020814).
As to claim 2, Ota fails to explicitly disclose comprising: a sensor facing in a same direction as the first camera and the second camera; wherein the instructions, when executed by the processor, cause the electronic device to: obtain, using the sensor, information regarding a distance from the object viewed in the preview image being displayed on the display, determine, based at least in part on the distance, changing the first camera set as a camera for the recognition function of the object to the second camera, and based on the determination, obtain the second image frames using the second camera. However, Ono teaches a sensor facing in a same direction as the first camera and the second camera (FIG. 2, distance sensor 52); wherein the instructions, when executed by the processor, cause the electronic device to: obtain, using the sensor, information regarding a distance from the object viewed in the preview image being displayed on the display (see [0033]), determine, based at least in part on the distance, changing the first camera set as a camera for the recognition function of the object to the second camera, and based on the determination, obtain the second image frames using the second camera (see [0085]).

At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ota using Ono's teachings to include a sensor facing in a same direction as the first camera and the second camera; wherein the instructions, when executed by the processor, cause the electronic device to: obtain, using the sensor, information regarding a distance from the object viewed in the preview image being displayed on the display, determine, based at least in part on the distance, changing the first camera set as a camera for the recognition function of the object to the second camera, and based on the determination, obtain the second image frames using the second camera, in order to increase the applications of the camera in which the image capturing can be performed and the amount of information that can be captured (Ono; [0086]).

As to claim 3, the combination of Ota and Ono further discloses wherein the first camera corresponds to a telephoto camera supporting a first magnification and the second camera corresponds to a telephoto camera supporting a second magnification higher than the first magnification (Ono; see [0085], in a case where the focal length regions of the two zoom lenses 220a and 220b are set in such a manner that the regions are partially overlapped, the focal length of the first capturing optical system 21a is set shorter and that of the second capturing optical system 21b is set longer), and wherein the instructions, when executed by the processor, cause the electronic device to: select, by using reference data regarding relation between candidate distances and magnifications available through the plurality of cameras, the second magnification related to a candidate distance corresponding to the distance, and change the first camera to the second camera supporting the second magnification (Ono; see [0085]).
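Claim 3's "reference data regarding relation between candidate distances and magnifications" amounts to a lookup from the measured distance to the camera and magnification to use. A minimal sketch of that lookup follows; the distance thresholds and magnification values are invented for illustration, since neither the claims nor the cited Ono passage gives concrete numbers:

```python
# Hypothetical reference data: (max candidate distance in metres, magnification).
# The thresholds and magnifications are illustrative, not from the application.
REFERENCE_DATA = [
    (0.5, 1.0),   # close subjects: keep the first camera at 1x
    (1.5, 2.0),   # mid-range: switch to a 2x telephoto
    (5.0, 4.0),   # far subjects: 4x telephoto
]

def select_magnification(distance_m: float) -> float:
    """Select the magnification whose candidate distance covers the measured one."""
    for max_distance, magnification in REFERENCE_DATA:
        if distance_m <= max_distance:
            return magnification
    return REFERENCE_DATA[-1][1]  # beyond the table: use the highest magnification

def camera_for(distance_m: float) -> str:
    """Claims 2-3: change the recognition camera when the lookup exceeds 1x."""
    return "second (telephoto)" if select_magnification(distance_m) > 1.0 else "first"
```

A response might argue that Ono's overlapping zoom ranges describe general-purpose capture, not a distance-driven change of which camera performs the *recognition function*, which is the specific mapping the rejection relies on [0085] to supply.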
As to claims 11-12, method claims 11-12 recite the same features as those recited in claims 2-3, respectively, and are therefore rejected for the same reasons as above.

As to claim 16, Ota discloses an electronic device (FIGS. 1-2), comprising: a plurality of cameras including a first camera and a second camera facing in a same direction as the first camera (FIG. 1B, cameras 114a, 114b, 114c); a display (FIG. 1A, display 105); a processor (FIG. 2, CPU 101); and memory for storing instructions that, when executed by the processor, cause the electronic device to (see [0018]): display, on the display, a preview image, based on first image frames obtained using the first camera from among the plurality of cameras (FIG. 4A, S401; see [0041], In S402, the CPU 101 displays, on the display 105, an LV image captured by the standard camera 114b among the three rear cameras 114 driven in S401), based at least in part on the distance (see [0036], [0047], [0048]): while maintaining displaying the preview image based on first image frames obtained using the first camera from among the plurality of cameras, obtain second image frames using the second camera (FIG. 4A, S404-S405; see [0043]-[0044], the CPU 101 determines whether the two-dimensional code is readable by the standard camera image processing unit 104b from the image captured by the standard camera 114b … If the two-dimensional code is unreadable (NO in S404), the processing proceeds to S405 … using the telecamera image processing unit 104a, the CPU 101 determines whether a subject appearing to be a two-dimensional code is included in an image captured by the telecamera 114a); and execute a recognition of quick response (QR) code based on the second image frames while displaying the preview image on the display (FIG. 4A, S406; see [0045], the CPU 101 determines whether the two-dimensional code is readable by the telecamera image processing unit 104a from the image captured by the telecamera 114a, e.g., determines whether a distribution pattern of the cells 602 of a subject appearing to be a two-dimensional code is detectable; see FIG. 6 and [0042], symbol 601 is a QR code).

Ota fails to explicitly disclose a sensor facing in a same direction as the first camera and the second camera; and obtain, by using the sensor, information regarding a distance from an external object viewed in the preview image being displayed on the display. However, Ono teaches a sensor facing in a same direction as the first camera and the second camera (FIG. 2, distance sensor 52); and obtain, by using the sensor, information regarding a distance from an external object viewed in the preview image being displayed on the display (see [0033]). At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ota using Ono's teachings to include a sensor facing in a same direction as the first camera and the second camera; and obtain, by using the sensor, information regarding a distance from an external object viewed in the preview image being displayed on the display, in order to increase the applications of the camera in which the image capturing can be performed and the amount of information that can be captured (Ono; [0086]).

As to claim 17, the combination of Ota and Ono further discloses wherein the instructions, when executed by the processor, cause the electronic device to: determine, based at least in part on the distance, changing a camera for a recognition of the QR code from the first camera to the second camera, and based on the determination, obtain the second image frames using the second camera (Ono; see [0085]).
As to claim 18, the combination of Ota and Ono further discloses wherein the instructions, when executed by the processor, cause the electronic device to: based at least in part on determining, based on the distance, the camera for the recognition of the QR code being maintained as the first camera (Ota; FIG. 4A, YES at S403-S404; Ono; see [0085]): obtain third image frames by magnifying the first image frames obtained using the first camera that is maintained as the camera for the recognition of the QR code is maintained as the first camera, while displaying the preview image (Ota; see [0042]; Ono; see [0085]), and execute the recognition of the QR code based on at least portion of the third image frames (Ota; FIG. 4A, S404 and S413; Ono; see [0085]).

As to claim 19, the combination of Ota and Ono further discloses wherein the instructions, when executed by the processor, cause the electronic device to: in response to success of the recognition of the QR code, display a visual object with the preview image being maintained on the display (Ono; FIGS. 3A-3B).

As to claim 20, the combination of Ota and Ono further discloses wherein the visual object is displayed along a periphery of the recognized QR code position on the external object viewed in the preview image (Ota; FIG. 3D and [0052]).

Claim(s) 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ota (US 20210042485) in view of Cho et al (US 20180063344).
As to claim 8, Ota fails to explicitly disclose wherein the instructions, when executed by the processor, cause the electronic device to: receive an input for obtaining the preview image, while displaying the preview image based on the first image frames obtained using the first camera, based on the input, obtain at least portion of the first image frames corresponding to the preview image and at least portion of the second image frames in conjunction with the at least portion of the first image frames, and store the at least portion of the second image frames with metadata related to the at least portion of the first image frames. However, Cho teaches receive an input for obtaining the preview image, while displaying the preview image based on the first image frames obtained using the first camera (FIG. 4B and [0074]), based on the input, obtain at least portion of the first image frames corresponding to the preview image and at least portion of the second image frames in conjunction with the at least portion of the first image frames (FIG. 4B and [0074]), and store the at least portion of the second image frames with metadata related to the at least portion of the first image frames (FIG. 4B and [0074]).

At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ota using Cho's teachings to receive an input for obtaining the preview image, while displaying the preview image based on the first image frames obtained using the first camera, based on the input, obtain at least portion of the first image frames corresponding to the preview image and at least portion of the second image frames in conjunction with the at least portion of the first image frames, and store the at least portion of the second image frames with metadata related to the at least portion of the first image frames, in order to easily store an image and pattern code through a pattern code trigger and an image capturing trigger, which are displayed in a camera application (Cho; [0007]).

As to claim 9, the combination of Ota and Cho further discloses wherein the instructions, when executed by the processor, cause the electronic device to: display the at least one of the first image frames (Cho; FIG. 5 (a)), receive another input for the object included in the at least one of the first image frames (Cho; FIG. 5 (a)-(b)), and display the at least one of the second image frames including the object having size larger than size of the object included in the at least one of the first image frames, based on the metadata related to the at least one of the first image frames (Cho; FIG. 5 (b)-(c)).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOUBACAR ABDOU TCHOUSSOU whose telephone number is (571) 272-7625. The examiner can normally be reached M-F 8am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BOUBACAR ABDOU TCHOUSSOU/ Primary Examiner, Art Unit 2482

Prosecution Timeline

Dec 19, 2023: Application Filed
Feb 25, 2026: Non-Final Rejection, §102/§103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604072: CAMERA AND INFRARED SENSOR SHUTTER (granted Apr 14, 2026; 2y 5m to grant)
Patent 12587755: VEHICLE-MOUNTED CONTROL DEVICE, AND THREE-DIMENSIONAL INFORMATION ACQUISITION METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587724: DIGITALLY ENHANCED MICROSCOPY FOR MULTIPLEXED HISTOLOGY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574509: METHOD AND APPARATUS FOR ENCODING/DECODING VIDEO AND METHOD FOR TRANSMITTING BITSTREAM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574476: VEHICULAR VISION SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 82% (+14.2%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 436 resolved cases by this examiner. Grant probability is derived from the career allow rate.
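The headline projections are simple arithmetic on the examiner's career statistics: the allow rate over resolved cases, plus the interview lift. A sketch of how the displayed figures reconcile, assuming the lift is additive in percentage points and values are rounded for display:

```python
# Reconcile the dashboard's headline numbers from the examiner's career stats.
# Assumption: the interview lift is additive in percentage points.
granted, resolved = 294, 436
career_allow_rate = granted / resolved               # ~0.674, shown as 67%
interview_lift = 0.142                               # +14.2 percentage points
with_interview = career_allow_rate + interview_lift  # ~0.816, shown as 82%
```

This treats the examiner's career allow rate as the grant probability for this application, which is a baseline estimate rather than a claim-specific prediction.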
