Prosecution Insights
Last updated: April 19, 2026
Application No. 18/849,497

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Non-Final OA: §101, §102, §103
Filed: Sep 21, 2024
Examiner: AGGARWAL, YOGESH K
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Sony Group Corporation
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Grants 90% — above average

Career Allow Rate: 90% (998 granted / 1113 resolved; +27.7% vs TC avg)
Interview Lift: +6.8% among resolved cases with interview (moderate, roughly +7%)
Avg Prosecution: 2y 7m typical timeline; 32 applications currently pending
Total Applications: 1145 across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§102: 36.4% (-3.6% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)

Based on career data from 1113 resolved cases; "vs TC avg" deltas are measured against Tech Center average estimates.

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is directed to a program claimed in the absence of any underlying medium or other system, but a program is not a method, machine, manufacture, or composition of matter. The claim thus falls outside the four statutory categories of 35 U.S.C. 101 and is therefore nonstatutory. If the specification includes written description support, this rejection could be overcome by claiming the invention as being stored in a nontransitory computer readable recording medium; however, see MPEP 2111.05 for a discussion of functional and nonfunctional descriptive material as related to computer readable media.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-4, 6-13, 15 and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Iijima et al. (US PGPUB 20100277620).
[Claim 1] An information processing device configured to simultaneously display a plurality of frames corresponding to a plurality of zoom magnifications, respectively, in a camera on a display on which a through image is displayed (Paragraph 72, The view angle candidate frame generation unit 121a generates the view angle candidate frames corresponding to the set candidate values. The view angle candidate frame display unit 122 superimposes the view angle candidate frames generated by the view angle candidate frame generation unit 121a on the input image so as to generate the output image. An example of the output image generated in this way is illustrated in the middle part of FIG. 4. An output image PA2 illustrated in the middle part of FIG. 4 is obtained by superimposing a view angle candidate frame FA1 corresponding to the candidate value of .times.4, a view angle candidate frame FA2 corresponding to the candidate value of .times.8, and a view angle candidate frame FA3 corresponding to the candidate value (upper limit value) of .times.12 on the input image under the current zoom magnification of .times.1.).

[Claim 2] The information processing device according to claim 1, wherein the through image zoomed at a zoom magnification corresponding to a frame selected by a user out of the plurality of frames is displayed on the display (Paragraph 77, if the user determines one of the view angle candidate frames (YES in STEP 4), the zoom in operation is performed so that the image having the angle of view of the determined view angle candidate frame is obtained (STEP 5), and the operation is finished. In other words, the zoom magnification is changed to the candidate value corresponding to the determined view angle candidate frame, and the operation is finished. If the view angle candidate frame FA3 is determined in the output image PA2 illustrated in the middle part of FIG. 4, for example, an output image PA3 illustrated in the lower part of FIG. 4 having substantially the same angle of view as the view angle candidate frame FA3 is obtained by the zoom in operation).

[Claim 3] The information processing device according to claim 1, wherein by a selection operation of a frame by a user, imaging by the camera is performed at a zoom magnification corresponding to the frame selected by the selection operation out of the plurality of frames (Paragraph 146, Therefore, as illustrated in a second drawing of FIG. 8, when a user touch input is sensed between the region of the original image 800 added by the second guide line 820 and the region of the original image 800 added by the first guide line 830, the controller 180 can sense the touch input 850 as a touch input for capturing the original image 800 at a zoom magnification (i.e., four times zoom magnification) based on the second guide line 820. Therefore, when a user input (for example, an input of the photographing key) for capturing an image is sensed, the controller 180 can store an image, zoomed in at four times magnification, in the memory 170).

[Claim 4] The information processing device according to claim 1, configured to determine a zoom magnification corresponding to a frame on a basis of a current zoom magnification of the through image (Paragraph 71, In this case, candidate values of the changed zoom magnification are set. As a method of setting the candidate values of the zoom magnification, for example, values obtained by dividing equally between the currently set zoom magnification and the upper limit value of the zoom magnification, and the upper limit value may be set as the candidate values. Specifically, for example, when it is supposed that the currently set zoom magnification is .times.1, the upper limit value is .times.12, and values obtained by dividing equally into three are set as candidate values, .times.12, .times.8, and .times.4 are set as the candidate values).
[Claim 6] The information processing device according to claim 1, configured to display a frame on a basis of a subject detected in the through image (Paragraph 86, As illustrated in FIG. 5, a display image processing unit 12b of this example includes a view angle candidate frame generation unit 121b which generates the view angle candidate frames based on the zoom information and the object information, and outputs the same as view angle candidate frame information, and a view angle candidate frame display unit 122).

[Claim 7] The information processing device according to claim 6, configured to determine a display position of the frame on a basis of a position of the detected subject (Paragraph 92, In this example, the view angle candidate frame generation unit 121b generates the view angle candidate frames so as to include the object in the input image (STEP 2b). Specifically, if the object is a human face, the view angle candidate frames are generated as a region including the face, a region including the face and the body, and a region including the face and the peripheral region. In this case, it is possible to determine the zoom magnifications corresponding to the individual view angle candidate frames from sizes of the view angle candidate frames and the current zoom magnification).

[Claim 8] The information processing device according to claim 6, configured to determine a zoom magnification corresponding to the frame on a basis of a size of the detected subject (Paragraph 87, The object information includes, for example, information about a position and a size of a human face in the input image detected from the input image, and information about a position and a size of a human face that is recognized to be a specific face in the input image and Paragraph 91, Thus, the view angle candidate frame generation unit 121b recognizes not only the currently set zoom magnification and the upper limit value but also a position and a size of the object in the input image).

[Claim 9] The information processing device according to claim 6, configured to display the frame corresponding to each of a plurality of the detected subjects (Paragraph 122, view angle candidate frames FB511 to FB513 are generated based on a plurality of objects D51 and D52 as illustrated in FIG. 12. For instance, view angle candidate frames FB511 to FB513 are generated based on barycentric positions of the plurality of objects D51 and D52. Specifically, for example, the view angle candidate frames FB511 to FB513 are generated so that barycentric positions of the plurality of objects D51 and D52 substantially match center positions of the view angle candidate frames FB511 to FB513).

[Claim 10] The information processing device according to claim 6, configured to display the frame corresponding to the subject selected by a user out of a plurality of the detected subjects (Paragraph 141, fig. 14, Specifically, for example, when the user designates a position of an object D71 in an output image PB70 for which the view angle candidate frames are not generated, view angle candidate frames FB711 to FB713 are generated based on the object D71 as in an output image PB71).
[Claim 11] The information processing device according to claim 1, configured to determine a size of the frame on a basis of a current zoom magnification of the through image (Paragraph 73, Therefore, based on the current zoom magnification and the candidate values, positions and sizes of the view angle candidate frames FA1 to FA3 can be set).

[Claim 12] The information processing device according to claim 1, configured to emphasize a frame recommended to a user out of the plurality of frames (Paragraph 75, fig. 4, For instance, the temporarily determined view angle candidate frame may be emphasized by displaying the entire perimeter of the angle of view indicated by the relevant view angle candidate frame with a thick line or a solid line while other view angle candidate frames that are not being temporarily determined may not be emphasized by displaying the entire perimeter of the angle of view indicated by the relevant view angle candidate frame with a thin line or a broken line).

[Claim 13] The information processing device according to claim 12, configured to emphasize, in a case where the plurality of frames is displayed corresponding to any one of a plurality of subjects detected in the through image, the frame not including the subject that does not correspond to the plurality of frames out of the plurality of frames (Paragraph 75, fig. 4, For instance, in the case where the operating unit 17 has a configuration including a zoom key (or cursor key) and an enter button, the user operates the zoom key so as to change a temporarily determined view angle candidate frame in turn, and presses the enter button so as to determine the temporarily determined view angle candidate frame. When the decision is performed in this way, it is preferred that the view angle candidate frame generation unit 121a display the view angle candidate frame FA3 that is temporarily determined by the zoom key in a different shape from others as illustrated in the middle part of FIG. 4 as the output image PA2, so that the temporarily determined view angle candidate frame FA3 may be discriminated. For instance, the temporarily determined view angle candidate frame may be emphasized by displaying the entire perimeter of the angle of view indicated by the relevant view angle candidate frame with a thick line or a solid line while other view angle candidate frames that are not being temporarily determined may not be emphasized by displaying the entire perimeter of the angle of view indicated by the relevant view angle candidate frame with a thin line or a broken line).

[Claim 15] The information processing device according to claim 12, configured to emphasize the frame corresponding to a zoom magnification executable by optical zooming (Paragraph 75, fig. 4).

[Claim 17] The information processing device according to claim 1, configured to display a button to which a user inputs for selecting a frame on the display (Paragraph 87, Note that, the object information is not limited to information about the human face, and may include information about a position and a size of a specific color part or a specific object (e.g., an animal), which is designated by the user via the operating unit 17 (a touch panel or the like) in the input image in which the designated object or the like is detected and Paragraph 75, fig. 4, Note that, in the case where the operating unit 17 is constituted of a touch panel or other unit that can specify any position, the view angle candidate frame that is closest to the position specified by the user may be determined or temporarily determined).

[Claim 18] The information processing device according to claim 1, wherein a frame to which a user performs an input at any position in the frame out of the plurality of frames is made a frame selected by the user (Paragraph 75, fig. 4, For instance, in the case where the operating unit 17 has a configuration including a zoom key (or cursor key) and an enter button, the user operates the zoom key so as to change a temporarily determined view angle candidate frame in turn, and presses the enter button so as to determine the temporarily determined view angle candidate frame. When the decision is performed in this way, it is preferred that the view angle candidate frame generation unit 121a display the view angle candidate frame FA3 that is temporarily determined by the zoom key in a different shape from others as illustrated in the middle part of FIG. 4 as the output image PA2, so that the temporarily determined view angle candidate frame FA3 may be discriminated. For instance, the temporarily determined view angle candidate frame may be emphasized by displaying the entire perimeter of the angle of view indicated by the relevant view angle candidate frame with a thick line or a solid line while other view angle candidate frames that are not being temporarily determined may not be emphasized by displaying the entire perimeter of the angle of view indicated by the relevant view angle candidate frame with a thin line or a broken line).

[Claims 19 and 20] These are method and program claims corresponding to apparatus claim 1 and are analyzed and rejected based upon apparatus claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Iijima et al. (US PGPUB 20100277620) in view of Kajimura (JP Patent # 2015005891).

[Claim 5] Iijima fails to teach configured to determine, in a case where the camera can use a plurality of lenses, a zoom magnification corresponding to a frame on a basis of a type of a lens currently used by the camera. However Kajimura teaches In the example shown in FIG. 3A, at the time of aiming, an image of the A lens having the shortest focal length and a display frame of each field angle frame are displayed on the screen of the display unit 14. Accordingly, the user can visually grasp how a framed image is acquired with a lens having another focal length. For example, at the time of aiming, the user can arbitrarily select a lens other than the D lens having the longest focal length and give an instruction to display the captured image (Paragraph 17). Therefore taking the combined teachings of Iijima and Kajimura, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have configured to determine, in a case where the camera can use a plurality of lenses, a zoom magnification corresponding to a frame on a basis of a type of a lens currently used by the camera in order for the user can visually grasp how a framed image is acquired with a lens having another focal length.

Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Iijima et al. (US PGPUB 20100277620) in view of Watanabe et al. (US PGPUB 20120113307).

[Claim 14] Iijima fails to teach emphasizing, in a case where a speed of a subject detected in the through image is a threshold or higher, a larger frame out of the plurality of frames. However Watanabe teaches in FIG. 41A, FIG. 41B, and FIG. 41C show examples of a display image when the effect is changed in accordance with a movement speed of such an object of interest. For example, when a movement speed of an object of interest OI is lower than that in FIG. 41A, an area of a portion surrounded by an ellipse as an effect EF is reduced as shown in FIG. 41B. Alternatively, for example, when a movement speed of the object of interest OI is lower than that in FIG. 41A, a pattern of the ellipse as the effect EF is changed to vary an area of the effect, e.g., thin the line as shown in FIG. 41C and/or vary a degree of processing of the effect, e.g., lighten a color of the line. Moreover, both an area of a portion surrounded by the ellipse as the effect and a pattern of the line may be changed in accordance with a movement speed of the object of interest (Paragraph 244). Therefore taking the combined teachings of Iijima and Watanabe, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have emphasized, in a case where a speed of a subject detected in the through image is a threshold or higher, a larger frame out of the plurality of frames in order to emphasize the speed of the object based on the type of the frame.

Claim(s) 16 is rejected under 35 U.S.C. 103 as being unpatentable over Iijima et al. (US PGPUB 20100277620) in view of Anthony et al. (US PGPUB 20130243408).

[Claim 16] Iijima fails to teach wherein a frame is displayed in a translucent state on the display. However Anthony teaches Translucent/disappearing control bar 910 may be used in some embodiments to maximize the display size of video display area 220 at a given aspect ratio. Translucent/disappearing control bar 910 may be configured to allow video information to show through the control bar 910 (Paragraph 68). Therefore taking the combined teachings of Iijima and Anthony, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have a frame displayed in a translucent state on the display in order to maximize the display size by allowing information to show through the translucent state.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOGESH K AGGARWAL whose telephone number is (571)272-7360. The examiner can normally be reached Monday - Friday 9:30-6.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sinh Tran, can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YOGESH K AGGARWAL/
Primary Examiner, Art Unit 2637

Prosecution Timeline

Sep 21, 2024
Application Filed
Jan 21, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604079
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12604100
IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598265
COOPERATIVE PHOTOGRAPHING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12587735
IMAGING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579842
METHOD FOR ADAPTING THE QUALITY AND/OR FRAME RATE OF A LIVE VIDEO STREAM BASED UPON POSE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 96% (+6.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 1113 resolved cases by this examiner. Grant probability derived from career allow rate.
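The headline figures above can be sanity-checked from the raw counts the page reports. A minimal sketch, assuming the tool simply divides career grants by resolved cases and adds the interview lift to the base rate (the tool's exact model is not disclosed, so the additive adjustment is an assumption):

```python
# Examiner's career counts, as shown in the Examiner Intelligence section.
granted, resolved = 998, 1113

# Career allow rate: 998 / 1113 ≈ 0.897, displayed as the 90% grant probability.
base = granted / resolved
print(f"Grant probability: {base:.0%}")  # → 90%

# Assumption: "With Interview" = base rate + the +6.8% interview lift.
interview_lift = 0.068
with_interview = base + interview_lift   # ≈ 0.965
print(f"With interview:    {with_interview:.0%}")  # → 96%
```

The rounded outputs match the dashboard's 90% and 96% cards, which supports reading the "With Interview" figure as the career allow rate plus the observed lift rather than an independently modeled probability.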
