Prosecution Insights
Last updated: April 19, 2026
Application No. 18/584,943

INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND STORAGE MEDIUM

Final Rejection: §103, §112
Filed: Feb 22, 2024
Examiner: BEUTEL, WILLIAM A
Art Unit: 2616
Tech Center: 2600 (Communications)
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 7m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 70%, above average (328 granted / 469 resolved; +7.9% vs TC avg)
Interview Lift: +20.4% (resolved cases with vs. without interview)
Avg Prosecution: 2y 7m typical timeline
Currently Pending: 28
Total Applications: 497 (across all art units)

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)

Tech Center average values are estimates • Based on career data from 469 resolved cases
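The card values above reconcile by simple arithmetic. The sketch below recomputes them from the stated counts; the Tech Center baselines are back-derived from the stated deltas, so treat those as estimates rather than source data:

```python
# Reconciling the examiner-dashboard figures above.
# Career counts come from the cards; the TC-average baselines are
# back-derived from the stated deltas (estimates, not source data).

granted, resolved = 328, 469

allow_rate = granted / resolved                 # career allowance rate
print(f"Career allow rate: {allow_rate:.1%}")   # ~69.9%, shown as 70%

tc_avg = allow_rate - 0.079                     # "+7.9% vs TC avg" implies ~62%
print(f"Implied TC 2600 allow-rate average: {tc_avg:.1%}")

# Statute-specific rates and their stated deltas vs the TC average
statutes = {"101": (0.099, -0.301), "102": (0.107, -0.293),
            "103": (0.498, +0.098), "112": (0.220, -0.180)}
for s, (rate, delta) in statutes.items():
    implied_avg = rate - delta                  # back out the TC baseline
    print(f"§{s}: {rate:.1%} (implied TC avg ≈ {implied_avg:.1%})")
```

Notably, each statute-specific delta backs out to roughly the same ~40% baseline, consistent with the stats being measured against a single Tech Center average.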

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendments

Claims 17 and 18 have been amended to overcome the rejections made pursuant to 35 U.S.C. 101, and as such the rejection has been withdrawn.

Response to Arguments

Applicant's arguments filed 2/20/2026 have been fully considered but they are not persuasive in part.

First, Applicant has not addressed the rejections made pursuant to 35 U.S.C. 112(b) for the terms recited in claims 1-20. The rejections remain pending.

Second, Applicant argues that claims 1-16 have been amended to remove the interpretation under 35 U.S.C. 112(f). Examiner respectfully disagrees. When the claim limitation does not use the term “means,” examiners should determine whether the presumption that 35 U.S.C. 112(f) does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term “means”). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): “mechanism for,” “module for,” “device for,” “unit for,” “component for,” “element for,” “member for,” “apparatus for,” “machine for,” or “system for.” Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Mass. Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886–87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). “The standard is whether the words of the claim are understood by persons of ordinary skill in the art to have a sufficiently definite meaning as the name for structure.” Williamson v. Citrix Online, LLC, 792 F.3d 1339, 1349, 115 USPQ2d 1105, 1111 (Fed. Cir. 2015). For a term to be considered a substitute for “means,” and lack sufficient structure for performing the function, it must serve as a generic placeholder and thus not limit the scope of the claim to any specific manner or structure for performing the claimed function. MPEP 2181(I)(A).

Claim 1, as currently recited, merely states that instructions cause the apparatus to function as a series of units. The claim, however, does not tie any particular structural element to the units (processors or otherwise). Instead, the claim merely recites the units as performed by instructions without structure. Accordingly, claim 1, and claims 2-20 dependent thereon, invoke interpretation under 35 U.S.C. 112(f). As such, the rejections under 35 U.S.C. 112 are also still appropriate as the specification fails to include adequate disclosure of corresponding structure.

Applicant’s arguments, see applicant’s correspondence filed 2/20/2026, with respect to the rejection(s) of claim(s) 1-18 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Noguchi.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an image input unit”, “a character input unit”, “an acceptance unit”, and “a poster generation unit” in claim 1 and incorporated by reference to dependent claims 2-16, and “a setting unit” recited in claim 19 and incorporated by reference to claim 20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim(s) 1-16 and 19-20 is/are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim(s), applicant recites the generic terms an image input unit and a character input unit, which invoke 35 U.S.C. 112(f), and for which the specification fails to provide adequate clarification as to the structures which perform the recited functions corresponding to the generic terms recited in the claim(s). As such, the claim(s) attempts to cover any and all structures or algorithms which perform the recited functions. Accordingly, the specification does not reasonably convey to one of ordinary skill in the art that the applicant had possession of the claimed invention, failing to comply with the written description requirement.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Rejection based on invocation of 112(f):

Regarding claim 1, the claim recites “a poster generation unit configured to generate a poster based on the image, the character, and the target impression” in lines 6-7 of the claim. As the claim is currently drafted, the claim invokes interpretation under 35 U.S.C. 112(f) for the corresponding structure of the “poster generation unit.” As discussed above in explaining the corresponding structures incorporated into the claim by reference from applicant’s specification, the “poster generation unit” can be either of two versions, where Spec. ¶52 discloses the unit as corresponding to the structure of Figure 2, and Spec. ¶176 discloses the unit as corresponding to the structure of Figure 17. As such, the claim is required to include one or the other structure, or equivalents thereof. The two alternatives, however, have overlapping elements.

One of those overlapping structures is the algorithm for the “layout unit”, which appears in both alternatives as element 217. Applicant’s specification discloses: “The layout unit 217 then applies the coloration pattern(s) acquired from the coloration pattern selection unit 215, and applies the font pattern(s) acquired from the font selection unit 216.” Spec. [0069]. This, however, results in the claim being indefinite, as the claim requires a “layout unit” in both alternatives (i.e., Fig. 2 and Fig. 17), but does not require the coloration pattern selection unit or the font selection unit in the second alternative (i.e., Fig. 17). As a result, the claim is rendered indefinite as to what is intended to comprise the structure of the “layout unit” when the second alternative is used, and further fails to disclose any corresponding structure for the layout unit 217 for the structure provided in Fig. 17. As such, one of ordinary skill in the art would not be reasonably apprised of the scope of the claimed invention and therefore the claim is indefinite.

Further regarding claim 1, the corresponding structure for the poster generation unit further includes an “image analysis unit” and therefore incorporates the corresponding structure (i.e., a computer-implemented algorithm for performing the computer function – see MPEP 2181(II)(B)). The specification recites a processor programmed with modules to perform the functions, where the corresponding algorithm is disclosed in Spec. ¶61, which states the image analysis unit “performs image data analysis processing on the image data acquired from the image acquisition unit 211 using a method to be described below.” The specification, however, does not clearly describe how the image data analysis is performed, and therefore the claim is rendered indefinite as it is unclear what the corresponding structure is for the recited poster generation unit.
Further regarding claim 1, as described above, the disclosure does not provide adequate structure to perform the claimed functions of an image input unit and a character input unit. The specification does not demonstrate that applicant has made an invention that achieves the claimed function because the invention is not described with sufficient detail such that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. Without clarifying the corresponding structure, the claim is indefinite as to the intended scope of the claimed units. Claims 2-16 and 19-20 depend from claim 1 and therefore are indefinite for the same reasons as claim 1, as the claims do not further clarify the corresponding structure.

Rejection based on indefinite language other than under 112(f):

Regarding claim 1, the claim recites “a poster generation unit configured to generate a poster” in line 6 of the claim. The term “poster” is unclear as to its intended meaning. Applicant’s specification discloses: “The poster display unit 205 outputs a poster image to be displayed on the display 105 based on the poster data acquired from the poster selection unit 219. An example of the poster image is bitmap data. The poster display unit 205 displays the poster image on the display 105.” Spec. ¶72. This appears to indicate that the “poster” is merely a generated image, but the difference in term usage is confusing as applicant does not explicitly recite a poster image in the claim. Furthermore, the specification goes on to state, “If the poster generation application has a function of printing the poster data stored in the HDD 104 using a printer based on a condition designated by the poster generation condition designation unit 201, the user can obtain a print product of the generated poster.” Spec. ¶74. This language in the specification seems to indicate that the poster data stored in the HDD is the “poster” rather than the image, as this is used by a user to obtain a print product of the “generated poster.” As such, the image data is not the poster, but rather merely the data alone. But the specification's use of both “poster” and “poster data” indicates that there is some difference. Accordingly, the use of the term “poster” renders the claim indefinite as to what is generated (i.e., is this merely organized data in memory, is this an image, or does applicant intend some third meaning for the term “poster” different from “poster data” and “poster image”, such as the actual printed product, which is more in line with the common plain meaning of the term “poster”?). Claims 2-16 depend from claim 1 and incorporate the same indefinite language as recited in claim 1. Claims 17 and 18 are rendered indefinite for using the same indefinite term “poster” and are rejected based on the same rationale as claim 1 set forth above.

Claim 1 further recites, “generate a poster based on … the target impression” without specifying how a particular impression is achieved. An impression is an idea, feeling, or opinion about something or someone that is subjective to each person individually. By requiring the generation of a poster to be achieved based on a subjective idea, feeling, or opinion, the claim is rendered indefinite, as the scope of what is claimed is related to a subjective, relative term that cannot be identified with any reasonable precision. Accordingly, the claim is rendered indefinite for the use of the target impression, as the scope of what poster is generated is unclear. Claims 2-16 and 19-20 depend from claim 1 and incorporate the same indefinite language as recited in claim 1. Claims 17 and 18 are rendered indefinite for using the same indefinite term “impression” and are rejected based on the same rationale as claim 1 set forth above.
Regarding claim 7, the claim recites “wherein the impression terms express impressions a poster gives.” The claim does not specify how to determine what impressions a poster gives. This appears to be a relative term based on a person’s own personal perspective and judgment, which is not quantifiable, and therefore one cannot be reasonably apprised of the scope of its meaning. Applicant’s specification is replete with disclosure of numerical values used for impression values (e.g., ¶143), but how the values are obtained in the first place is unclear, or at most is explicitly recited as based on the subjective intent of people (e.g., ¶94). There is no clear guidance as to how these values are obtained or quantized that would reasonably apprise one of ordinary skill in the art of the scope of the impressions claimed. Accordingly, it is not clear what constitutes “the impression terms express impressions a poster gives” for purposes of interpretation, as this requires the individual subjective interpretation of a person. Accordingly, the claim is indefinite as there is no clear scope as to what is intended by the impressions a poster gives.

Regarding claim 12, the claim recites “wherein poster images with different target impressions are generated if the poster images include the character or the image and differ in a method of arrangement of the character or the image.” As currently drafted, it is unclear whether the different target impressions are generated if the poster images differ in a method of arrangement of the character or the image, or whether the claim intends to recite wherein the poster images with different target impressions are generated … and differ in a method of arrangement of the character or the image. Examiner respectfully requests that applicant clarify the claim. For purposes of interpretation, the claim is interpreted as merely requiring that the target impressions differ in a method of arrangement of the character or image.
Regarding claim 17, the claim recites “UI” without any corresponding indication of the meaning of the abbreviation. Accordingly, the claim is rendered indefinite. For purposes of interpretation, UI will be interpreted as “user interface” (as in claim 1).

Regarding claim 18, the claim recites “UI” without any corresponding indication of the meaning of the abbreviation. Accordingly, the claim is rendered indefinite. For purposes of interpretation, UI will be interpreted as “user interface” (as in claim 1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Furuya (US 2017/0084066 A1) in view of Quek et al. (US 2006/0181736 A1) and Agnoli et al. (US 2017/0192627 A1), in further view of Noguchi (US 2019/0371027 A1).

Regarding claim 1, Furuya discloses:

An information processing apparatus comprising: one or more processors; one or more memories storing instructions to cause the information processing apparatus to function (Furuya, Fig.
1 shows a distributed apparatus, including an image compositing server connected to a smartphone and a printer; ¶¶63-67 disclose the computer architecture for providing software operations, including server 20 controlled by CPU 21, where image compositing server 20 includes a memory 22 for storing data temporarily and a compact-disc drive 24 for accessing a compact disc 23 on which a program for controlling operation has been stored, and the program stored on the compact disc 23 is read by the compact-disc drive 24) as

an image inputting unit configured to input an image; (Furuya, ¶78: The user operates the smartphone 1 and selects a target image; ¶99: The target image I1 to be combined with a template is selected (it is assumed here that the target image I1 has been selected) by the user (step 41) and target image data representing the selected target image I1 is transmitted from the smartphone 1 to the image compositing server 20 (step 42).)

an acceptance unit configured to accept designation of a target impression by a user (Furuya, ¶63 discloses touch-sensitive panel for input; ¶99: CPU 21 selects the template for which the discrepancy with respect to the impression evaluation value of the target image I1 is less than a threshold value (step 53), e.g. Fig. 6 templates T5, T3 and T4 have been selected, and in response, template image data representing each of the selected templates T5, T3 and T4 is transmitted from the image compositing server 20 to the smartphone 1 (step 81); Fig. 11 and ¶100: template image data sent from compositing server to smartphone, and displayed on display screen of smartphone; ¶103: By touching the desired template from among the templates T5, T3 and T4 being displayed on the display screen 60, the user designates the template (step 73), and in response, identification data identifying the designated template is transmitted from the smartphone 1 to the image compositing server 20 (step 74)); Examiner notes that the user selecting a template by itself is a “designation of a target impression by a user”, as the user is choosing a template they desire, which under BRI is read on by the broad claim language; but Furuya also discloses a narrower interpretation in which selecting an impression is tied to something tracked by the computer itself, as each template is also matched to a corresponding impression – see Fig. 5 and ¶73 – and as such selecting a template also selects an impression in the sense of what the computer tracks); and

a poster generating unit configured to generate a poster based on the image, (Furuya, ¶104: When the identification data sent from the smartphone 1 is received by the image compositing server 20 (step 82), the CPU 21 (target image combining unit) combines the target image with the combining area of the template specified by the received identification data (step 83), the composite image is printed by the printer 29 (step 84) and the printed composite image is delivered to the residence of the user.)

wherein the acceptance unit is configured to accept the designation of the target impression via a (Furuya, Fig. 11 and ¶100: template image data sent from compositing server to smartphone, and displayed on display screen of smartphone; ¶103: By touching the desired template from among the templates T5, T3 and T4 being displayed on the display screen 60, the user designates the template (step 73), and in response, identification data identifying the designated template is transmitted from the smartphone 1 to the image compositing server 20 (step 74)) in which a plurality of impression[s] (Furuya, ¶92: templates selected by CPU 21 in order of increasing discrepancy; ¶93: the composite images are displayed on the display screen of the smartphone 1 in order of increasing discrepancy between the impression evaluation values (step 44) – Fig. 9; Fig. 10 and ¶¶100-102 disclose displaying just the template image data in order of increasing discrepancy – i.e. “relative impression similarity”)

and wherein the poster generation unit is configured to generate the poster by adjusting one or more design elements in accordance with the target impression corresponding to a position designated on the (Furuya, ¶¶103-104: by touching the desired template from among the templates displayed on the screen, the user designates a template and in response identification data identifying the designated template is transmitted to the image compositing server, where CPU 21 (target image combining unit) combines the target image I1 with the combining area of the template specified by the received identification data (step 83) and the composite image is printed by the printer 29 – note Fig. 9 shows changes of image shape and location along with different accompanying design elements based on template)

Furuya does not explicitly disclose inputting a character and generating a poster based on the character.
Quek discloses:

A character input unit configured to input a character (Quek, ¶86: template 1200 can also include one or more text receiving areas 1225 for the user to enter text to form a text object in the image collage to be created; ¶96: computer software in computer system, including processor and programs stored on storage medium or device readable to operate device) and a poster generation unit configured to generate a poster based on the image, the character, and the target style (Quek, Fig. 10 and ¶83: user selects collage style and selects images for selected image collage; ¶86: user can change collage template; Fig. 12 and ¶87 discusses images and text edited on collage template; ¶90: The data structure 1300 allows the rendering of an image collage to produce an intact digital image, for example, in bitmap or JPEG format where the image and text objects are fixed in the intact digital image; ¶96: computer software in computer system, including processor and programs stored on storage medium or device readable to operate device)

Both Furuya and Quek are directed to computer software for generating a poster image based on a template and a user’s media inputs. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, by incorporating text within the template for producing a media printout as provided by Quek, using known electronic interfacing and programming techniques. The modification of adding text to a printable media that is formed from user input content merely uses a known technique of including text in addition to images in a template to generate media to improve a base device that uses templates to generate media. The modification yields predictable results of including both images and text (e.g. as disclosed by Quek) for generating a combined media computer graphic. The modification also results in an improved print media by allowing additional content commonly desired on such printable media, i.e. text, along with the images to allow for more creativity and better tailoring to the user’s desired preferences.

Furuya modified by Quek does not explicitly disclose the particular layout of the user interface in a “ring-shaped operation user interface.” This appears to be merely a design choice that has no bearing on the functionality of the claimed invention but instead is merely an aesthetic choice of the designer, and therefore is given no patentable weight. However, Agnoli discloses:

wherein the acceptance unit is configured to accept the designation by the user via a ring-shaped operation user interface (UI) (Agnoli, Figs. 6A and 6B and ¶177: radial menu 610 including selectable options 612)

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, incorporating text within the template for producing a media printout as provided by Quek, by further using a ring-shaped operation user interface for designating user selections as provided by Agnoli, using known electronic interfacing and programming techniques. The modification merely substitutes one known shape for displaying a series of selectable items (i.e. grid) for another (i.e. circular), to obtain predictable results of using a ring-shaped menu for selecting options presented as a series of icons. The different shaped menus are known in the art as shown by the references cited, and one of ordinary skill in the art would have found the substitution predictable as it merely lays out the icons in a different pattern on the display.

The only limitation not explicitly taught by Furuya modified by Quek and Agnoli is that the user interface discloses “terms” as opposed to images representing the selectable controls for changing a design. Noguchi, however, teaches that it was known to provide a user interface that provides guidance for controlling an impression of a modified image using impression terms (Noguchi, Fig. 5 and ¶59 discloses sensibility words indicating a distribution of impression terms; Fig. 8 and ¶69 discloses displaying impression terms on a selectable UI controller to select a particular impression, including end points at particular impressions, e.g. “cute” vs. “elegant”)

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, incorporating text within the template for producing a media printout as provided by Quek, using a ring-shaped operation user interface for designating user selections as provided by Agnoli, by substituting text terms for selecting particular impressions for modifying an image using a template, as opposed to graphical depictions or icons, as provided by Noguchi, using known electronic interfacing and programming techniques. The modification merely substitutes one known visual representation of information for selection for another, namely text information as opposed to graphical depictions or pictographs, yielding predictable results of using language in place of images for conveying information in a user interface. The modification allows for an alternative depiction of information for displaying controls to a user, improving upon merely providing images by utilizing descriptive language that might be more concise and clear.
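For readers tracing the §103 mapping: the Furuya mechanism the examiner reads onto the “acceptance unit” keeps templates whose impression evaluation value is within a threshold of the target image’s, then presents them in order of increasing discrepancy (Furuya ¶¶92-99 as characterized above). A minimal sketch of that logic, as an illustrative reconstruction rather than code from any reference (the names `Template` and `select_templates` and the sample values are hypothetical):

```python
# Illustrative sketch (not Furuya's actual code) of the cited selection
# logic: keep templates whose impression-evaluation discrepancy vs. the
# target image is under a threshold, then order them by increasing
# discrepancy (cf. Furuya ¶¶92-99). All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Template:
    name: str
    impression_value: float   # per-template impression evaluation value

def select_templates(target_value: float, templates: list[Template],
                     threshold: float) -> list[Template]:
    # step 53 analogue: discrepancy below threshold
    candidates = [t for t in templates
                  if abs(t.impression_value - target_value) < threshold]
    # Figs. 9-10 analogue: present in order of increasing discrepancy
    return sorted(candidates,
                  key=lambda t: abs(t.impression_value - target_value))

templates = [Template("T3", 0.42), Template("T4", 0.55),
             Template("T5", 0.39), Template("T7", 0.90)]
picked = select_templates(target_value=0.40, templates=templates,
                          threshold=0.20)
print([t.name for t in picked])  # prints ['T5', 'T3', 'T4']
```

With these sample values, T7 is filtered out by the threshold and the rest are ordered by closeness to the target impression, mirroring the T5, T3, T4 ordering the Office Action attributes to Furuya's Fig. 6.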
Regarding claim 18, Furuya discloses: A non-transitory computer-readable storage medium storing computer executable instructions that, when executed by a computer, cause the computer to perform a control method, the control method comprising: (Furuya, ¶¶63-67 discloses the computer architecture for providing software operations, including server 20 controlled by CPU 21, where the image compositing server 20 includes a memory 22 for storing data temporarily and a compact-disc drive 24 for accessing a compact disc 23; a compact disc 23, on which a program for controlling operation has been stored, is loaded in the image compositing server 20 and the program that has been stored on the compact disc 23 is read by the compact-disc drive 24) inputting an image; (Furuya, ¶78: The user operates the smartphone 1 and selects a target image; ¶99: The target image I1 to be combined with a template is selected (it is assumed here that the target image I1 has been selected) by the user (step 41) and target image data representing the selected target image I1 is transmitted from the smartphone 1 to the image compositing server 20 (step 42).) accepting designation of a target impression by a user (Furuya, ¶99: CPU 21 selects the template for which the discrepancy with respect to the impression evaluation value of the target image I1 is less than a threshold value (step 53), e.g. Fig. 6 templates T5, T3 and T4 have been selected, and in response, template image data representing each of the selected templates T5, T3 and T4 is transmitted from the image compositing server 20 to the smartphone 1 (step 81); Fig. 
11 and ¶100: template image data sent from compositing server to smartphone, and displayed on display screen of smartphone; ¶103: By touching the desired template from among the templates T5, T3 and T4 being displayed on the display screen 60, the user designates the template (step 73), and in response, identification data identifying the designated template is transmitted from the smartphone 1 to the image compositing server 20 (step 74)); Examiner notes that the user selecting a template by itself is a “designation of a target impression by a user”, as the user is choosing a template they desire, which under BRI is read on by the broad claim language, but Furuya also discloses a narrower interpretation that selecting an impression is tied to something tracked by the computer itself, as each template is also matched to a corresponding impression – see Fig. 5 and ¶73 – and as such selecting a template also selects an impression in the sense of what the computer tracks); and generating a poster based on the image, (Furuya, ¶104: When the identification data sent from the smartphone 1 is received by the image compositing server 20 (step 82), the CPU 21 (target image combining unit) combines the target image with the combining area of the template specified by the received identification data (step 83), the composite image is printed by the printer 29 (step 84) and the printed composite image is delivered to the residence of the user.) wherein the designation of the target impression via a (Furuya, Fig. 11 and ¶100: template image data sent from compositing server to smartphone, and displayed on display screen of smartphone; ¶103: By touching the desired template from among the templates T5, T3 and T4 being displayed on the display screen 60, the user designates the template (step 73), and in response, identification data identifying the designated template is transmitted from the smartphone 1 to the image compositing server 20 (step 74)). 
in which a plurality of impression[s] (Furuya, ¶92: templates selected by CPU 21 in order of increasing discrepancy; ¶93: the composite images are displayed on the display screen of the smartphone 1 in order of increasing discrepancy between the impression evaluation values (step 44) – Fig. 9; Fig. 10 and ¶¶100-102 discloses displaying just the template image data in order of increasing discrepancy – i.e. “relative impression similarity”) and wherein the poster generation unit is configured to generate the poster by adjusting one or more design elements in accordance with the target impression corresponding to a position designated on the (Furuya, ¶¶103-104: by touching the desired template from among the templates displayed on the screen, user designates a template and in response identification data identifying the designated template is transmitted to image compositing server, where CPU 21 (target image combining unit) combines the target image I1 with the combining area of the template specified by the received identification data (step 83) and the composite image is printed by the printer 29 – note Fig. 9 shows changes of image shape and location along with different accompanying design elements based on template) Furuya does not explicitly disclose inputting a character and generating a poster based on the character. Quek discloses: inputting a character (Quek, ¶86: template 1200 can also include one or more text receiving areas 1225 for the user to enter text to form text object in the image collage to be created) and generating a poster based on the image, the character, and the target style (Quek, Fig. 10 and ¶83: user selects collage style and selects images for selected image collage; ¶86: user can change collage template; Fig. 
12 and ¶87 discusses images and text edited on collage template; ¶90: The data structure 1300 allows the rendering of an image collage to produce an intact digital image, for example, in bitmap or JPEG format where the image and text objects are fixed in the intact digital image) Both Furuya and Quek are directed to computer software for generating a poster image based on a template and user’s media inputs. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, by incorporating text within the template for producing a media printout as provided by Quek, using known electronic interfacing and programming techniques. The modification of adding text to a printable media that is formed from user input content merely uses a known technique of including text in addition to images in a template to generate media to improve a base device that uses templates to generate media. The modification yields predictable results of including both images and text (e.g. disclosed by Quek) for generating a combined media computer graphic. The modification also results in an improved print media by allowing additional content commonly desired on such printable media, i.e. text, along with the images to allow for more creativity and better tailoring to the user’s desired preferences. Furuya modified by Quek does not teach the particular layout of the user interface in a “ring-shaped operation user interface.” This appears to merely be a design choice that has no bearing on the functionality of the claimed invention but instead is merely an aesthetic choice of the designer, and therefore is given no patentable weight. However, Agnoli discloses: wherein the acceptance unit is configured to accept the designation by the user via a ring-shaped operation user interface (UI) (Agnoli, Figs. 
6A and 6B and ¶177: radial menu 610 including selectable options 612) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, incorporating text within the template for producing a media printout as provided by Quek, by further using a ring-shaped operation user interface for designating user selections as provided by Agnoli, using known electronic interfacing and programming techniques. The modification merely substitutes one known shape for displaying a series of selectable items (i.e. grid) for another (i.e. circular), to obtain predictable results of using a ring-shaped menu for selecting options presented as a series of icons. The different shaped menus are known in the art as shown by the references cited, and one of ordinary skill in the art would have found the substitution predictable as it merely lays out the icons in a different pattern on the display. The only limitation not explicitly taught by Furuya modified by Quek and Agnoli is that the user interface discloses “terms” as opposed to images representing the selectable controls for changing a design. Noguchi, however, teaches that it was known to provide a user interface that provides guidance for controlling an impression of a modified image using impression terms (Noguchi, Fig. 5 and ¶59 discloses sensibility words indicating a distribution of impression terms; Fig. 8 and ¶69 discloses displaying impression terms on a selectable UI controller to select a particular impression, including end points at particular impressions, e.g. “cute” vs. 
“elegant”) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, incorporating text within the template for producing a media printout as provided by Quek, using a ring-shaped operation user interface for designating user selections as provided by Agnoli, by substituting text terms for selecting particular impressions for modifying an image using a template as opposed to graphical depictions or icons, as provided by Noguchi, using known electronic interfacing and programming techniques. The modification merely substitutes one known visual representation of information for selection for another, namely text information as opposed to graphical depictions or pictographs, yielding predictable results of using language in place of images for conveying information in a user interface. The modification allows for an alternative depiction of information for displaying controls to a user, improving upon merely providing images by utilizing descriptive language that might be more concise and clear. Regarding claim 17, the claim recites the same method as performed by the control method of claim 18, and as such claim 17 is rejected based on the same rationale as claim 18 above. Regarding claim 2, Furuya further discloses: a plurality of impression terms (Furuya, Fig. 5 discloses the impression terms included in the table of template impression evaluation values, e.g. Gender Masculinity and level) Furuya is modified by Quek, and Agnoli further discloses wherein the ring-shaped operation UI includes a plurality of terms arranged in a ring configuration (Agnoli, Fig. 12A discloses ring of plurality of terms, e.g. color, font, size, style) Furuya is modifiable by Quek and Agnoli for the same reasons as claim 18 set forth above. 
Regarding claim 3, Furuya discloses: wherein the plurality of impression terms includes at least a first impression term, a second impression term, and a third impression term, (Furuya, Fig. 5 discloses the impression terms included in the table of template impression evaluation values, e.g. Gender Masculinity and level) Agnoli further discloses, wherein the ring-shaped operation UI includes the first and second terms that are arranged next to each other and the first and third terms that are not arranged next to each other. (Agnoli, Fig. 12A discloses ring of plurality of terms, e.g. color, font, size – i.e. not next to each other on ring) Furuya is modifiable by Quek and Agnoli for the same reasons as claim 18 set forth above. Regarding claim 4, Furuya further discloses: wherein a difference between an impression value of a poster generated with the first impression term and an impression value of a poster generated with the second impression term is smaller than a difference between the impression value of the poster generated with the first impression term and an impression value of a poster generated with the third impression term (Furuya, ¶¶91-92 discloses template impression evaluation for target image, where template selection unit displays selected templates in order of increasing discrepancy) Regarding claim 5, Furuya further discloses: wherein the poster generation unit is configured to cause a linear change in a design element of a poster generated with the first impression term and that of a poster generated with the second impression term. (Furuya, ¶88: The table of target image impression evaluation values shown in FIG. 8 can be thought of as containing impression evaluation values that correspond to coordinate values obtained by taking respective ones of the four types of impression evaluation value as axes in a manner similar to the table of template impression evaluation values shown in FIG. 5 – i.e. 
a change in impression values along an axis is a linear change; Also ¶¶90-91 discloses further calculations) Regarding claim 6, Furuya further discloses: wherein the design element is at least one of the following: a picture size, a picture position, a color tone after picture processing, a graphics tone, a font shape, a font weight, a title size, a title position, and a title tilt. (Furuya, Fig. 4 and ¶69 discloses different templates with different picture positions and size, e.g. T1, T2 and T5 showing change in size and shape; Also see Fig. 9) Regarding claim 7, Furuya further discloses: wherein the impression terms express impressions a poster gives (Furuya, ¶99: templates associated with different impressions; Fig. 5 discloses the impression terms included in the table of template impression evaluation values, e.g. Gender Masculinity and level) Regarding claim 8, Furuya further discloses: wherein the impression terms include at least one of the following: stately, vigorous, pop, peaceful, elegant, and luxurious (Furuya, Fig. 5 discloses e.g. youthfulness) Furthermore, with regard to claim 8, Furuya describes using different impression terms associated with different templates (e.g. Fig. 5). Furuya does not explicitly recite the identical terms as recited by claim 8, i.e. stately, vigorous, pop, peaceful, elegant, and luxurious. However, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include in Furuya the use of different terms as related to impressions of the templates, as the applicant has not disclosed that using the specific terms “stately, vigorous, pop, peaceful, elegant, and luxurious” provides an advantage, solves any stated problem, or is for any particular purpose as opposed to other categories or terms. 
Furthermore, it appears that the software of Furuya for matching templates to impressions would perform equally well for the recited functions for matching templates to different impressions regardless of the impressions that are used by the system. Instead, the particular term used appears to merely be a design choice (i.e. a choice to use one impression or category of data over another, but still using the same functional aspect of the invention which is to associate templates with impressions, regardless of what impressions the designer would like to use). Accordingly, the use of the particular terms, “stately, vigorous, pop, peaceful, elegant, and luxurious” as opposed to any other impression term (e.g. masculinity, youthfulness, seriousness) is deemed to be a design consideration which fails to patentably distinguish over the prior art of Furuya. Regarding claim 9, Furuya further discloses: wherein the target impression is determined by a combination of factors representing impressions (Furuya, ¶84: impression evaluation with regard to target image based on face, clothing, hair; ¶¶85-87 discusses the different factors for determining an impression; ¶112: impression for vitality in addition to other factors) Regarding claim 10, Furuya further discloses: wherein poster images with different target impressions are generated if the poster images differ in coloration. (Furuya, ¶29: stores template color distribution data with regard to multiple templates and stores template impression evaluation values in correspondence with the color distribution data; Fig. 
16 and ¶119: color distribution data with templates associated with different color distributions and different impressions, used for generating final images; also ¶119: Templates T11 and T16 have the same color distribution data, but, since they are different templates, their layouts and color placements differ) Regarding claim 11, Furuya further discloses: wherein poster images with different target impressions are generated (Furuya, Fig. 4 and ¶69 discloses different templates with different picture positions and size, e.g. T1, T2 and T5 showing change in size and shape; Also see Fig. 9) The claim further recites, “if the poster images include the character and differ in a font of the character”, where Furuya does not explicitly teach the use of character or font. This limitation, however, is merely stating a result, where different poster images are generated having different target impressions, which may include text. In other words, the text can be on the poster or not, so long as the posters themselves are generated with different impressions. What the claim does not require is that the character and font determine the impression itself (i.e. there is no functional correlation between having a character and font, but rather if there is character/font included, the character/font is included in the poster images that are otherwise generated with different target impressions, and the character/font is not the driving feature of the impression). Furuya modified by Quek further discloses: wherein poster images are generated if the poster images include the character and differ in a font of the character (Quek, Fig. 10 and ¶83: user selects collage style and selects images for selected image collage; ¶86: user can change collage template; Fig. 
12 and ¶87 discusses images and text edited on collage template, where the user can also change the font type, the font size, the color, or format of the text in the text object 1225; ¶90: The data structure 1300 allows the rendering of an image collage to produce an intact digital image, for example, in bitmap or JPEG format where the image and text objects are fixed in the intact digital image) Both Furuya and Quek are directed to computer software for generating a poster image based on a template and user’s media inputs. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, by incorporating text within the template for producing a media printout as provided by Quek, using known electronic interfacing and programming techniques. The modification of adding text to a printable media that is formed from user input content merely uses a known technique of including text in addition to images in a template to generate media to improve a base device that uses templates to generate media. The modification yields predictable results of including both images and text (e.g. disclosed by Quek) for generating a combined media computer graphic. The modification also results in an improved print media by allowing additional content commonly desired on such printable media, i.e. text, along with the images to allow for more creativity and better tailoring to the user’s desired preferences. Regarding claim 12, Furuya further discloses: wherein poster images with different target impressions are generated if the poster images include the character or the image and differ in a method of arrangement of the character or the image. (Furuya, Figs. 
9 and 11 show templates of different impressions having a different arrangement of the inserted media, or image; ¶99 discloses templates associated with different impressions) Regarding claim 13, Furuya further discloses: wherein information indicating a difference between an impression the poster generated by the poster generation unit produces and the designated target impression is less than a predetermined threshold. (Furuya, ¶90: templates selected based on discrepancy with respect to calculated impression evaluation value of target image being less than a threshold value) Regarding claim 14, Furuya further discloses: wherein a poster image is generated based on a skeleton, the skeleton being information indicating arrangement of the (Furuya, Fig. 4 and ¶70: templates with target image portions, shown varying in location, size, and shape based on template – i.e. layout is “skeleton” which is included in template; Fig. 5 and ¶73: Template impressions and template impression evaluation values have been stored in the table for template impression evaluation values on a per-template basis with regard to a plurality of templates – i.e. changing a template changes layout of elements, or skeleton, and because each template is associated with a different impression, changing the template, which varies the layout/skeleton therefore also generates different target impressions; ¶119: templates having layouts) Quek further discloses: wherein a poster image is generated based on a skeleton, the skeleton being information indicating arrangement of the character, the image, and graphics in the poster image (Quek, Fig. 13 and ¶¶88-89: data structure for image collage, including text objects and image objects, as well as background theme, and their respective locations) Both Furuya and Quek are directed to computer software for generating a poster image based on a template and user’s media inputs. 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, by incorporating text within the template for producing a media printout as provided by Quek, using known electronic interfacing and programming techniques. The modification of adding text to a printable media that is formed from user input content merely uses a known technique of including text in addition to images in a template to generate media to improve a base device that uses templates to generate media. The modification yields predictable results of including both images and text (e.g. disclosed by Quek) for generating a combined media computer graphic. The modification also results in an improved print media by allowing additional content commonly desired on such printable media, i.e. text, along with the images to allow for more creativity and better tailoring to the user’s desired preferences. Regarding claim 15, Furuya further discloses: wherein the poster image is generated with t (Furuya, Fig. 4 and ¶70: templates with target image portions, shown varying in location, size, and shape based on template – i.e. layout is “skeleton” which is included in template; Fig. 11 and Fig. 16 and ¶119: color distribution data with templates associated with different color distributions and different impressions, used for generating final images; also ¶119: Templates T11 and T16 have the same color distribution data, but, since they are different templates, their layouts and color placements differ) Quek further discloses: wherein the poster image is generated with the character and the image arranged on a template into which the skeleton is combined with coloration and a character font in the poster image (Quek, Fig. 
13 and ¶¶88-89: data structure for image collage, including text objects and image objects, as well as background theme, and their respective locations; Fig. 12 and ¶87 discusses images and text edited on collage template; ¶90: The data structure 1300 allows the rendering of an image collage to produce an intact digital image, for example, in bitmap or JPEG format where the image and text objects are fixed in the intact digital image) Both Furuya and Quek are directed to computer software for generating a poster image based on a template and user’s media inputs. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, by incorporating text within the template for producing a media printout as provided by Quek, using known electronic interfacing and programming techniques. The modification of adding text to a printable media that is formed from user input content merely uses a known technique of including text in addition to images in a template to generate media to improve a base device that uses templates to generate media. The modification yields predictable results of including both images and text (e.g. disclosed by Quek) for generating a combined media computer graphic. The modification also results in an improved print media by allowing additional content commonly desired on such printable media, i.e. text, along with the images to allow for more creativity and better tailoring to the user’s desired preferences. Regarding claim 16, Furuya further discloses: a skeleton close to the target impression and coloration in the poster image selected (Quek, Fig. 10 and ¶83: user selects collage style and selects images for selected image collage; ¶86: user can change collage template; Fig. 
12 and ¶87 discusses images and text edited on collage template; ¶90: The data structure 1300 allows the rendering of an image collage to produce an intact digital image, for example, in bitmap or JPEG format where the image and text objects are fixed in the intact digital image; ¶96: computer software in computer system, including processor and programs stored on storage medium or device readable to operate device; ¶119: Templates T11 and T16 have the same color distribution data, but, since they are different templates, their layouts and color placements differ) Quek further discloses: wherein a skeleton and coloration and a character font in the poster image are separately selected and the poster image is generated based on a combination of the selected skeleton with the selected coloration and the selected character font in the poster image. (Quek, Fig. 10 and ¶83: user selects collage style and selects images for selected image collage; ¶86: user can change collage template; Fig. 12 and ¶87 discusses images and text edited on collage template, including colors; ¶¶88-89: data structure for image collage, including text objects and image objects, as well as background theme, and their respective locations; Fig. 13 showing structure including color enhancement; ¶90: The data structure 1300 allows the rendering of an image collage to produce an intact digital image, for example, in bitmap or JPEG format where the image and text objects are fixed in the intact digital image) Both Furuya and Quek are directed to computer software for generating a poster image based on a template and user’s media inputs. 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, by incorporating text and other characteristics within the template for producing a media printout as provided by Quek, using known electronic interfacing and programming techniques. The modification of adding text to a printable media that is formed from user input content merely uses a known technique of including text in addition to images in a template to generate media to improve a base device that uses templates to generate media. The modification yields predictable results of including both images, text and other aesthetic characteristics (e.g. disclosed by Quek) for generating a combined media computer graphic. The modification also results in an improved print media by allowing additional content commonly desired on such printable media, i.e. text, along with the images to allow for more creativity and better tailoring to the user’s desired preferences. Regarding claim 19, Furuya modified by Quek, Agnoli, and Noguchi further discloses: wherein the instructions further cause the information processing apparatus to function as: a setting unit configured to set category information of the poster generated by the poster generation unit, wherein the plurality of impression terms are selected based on the category information set by the setting unit (Noguchi, ¶59 discloses determining an axis for the dispersion of impression values among a plurality of impression axes, such that the terms are determined in advance - i.e. the groupings of terms along axes are categories of available term sets, i.e. distinct classes of entities, or divisions within a system of classification; Also Fig. 
5 and ¶124 dividing impression areas in blocks and determining axis from positioning, where the impression area shown in FIG. 5 may be divided into a plurality of blocks such as 16 or 25 blocks, and impressions in the same block may be considered to have the same impression, and then, the above-described processes may be performed) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, incorporating text within the template for producing a media printout as provided by Quek, using a ring-shaped operation user interface for designating user selections as provided by Agnoli, substituting text terms as provided by Noguchi, along with the selection of terms based on additional organization of data into groupings of information as provided by Noguchi, using known electronic interfacing and programming techniques. The modification merely provides additional groupings of impressions to ensure better alignment of impression data for determining appropriate terms to provide to a user, providing an improved user interface for adjusting impressions that better align so that a user is provided with more relevant controls and is better able to understand and adjust the resulting images to their preferences. Regarding claim 20, Furuya further discloses: wherein the plurality of impression terms selected by the selection unit are arranged, in the (Furuya, ¶92: templates selected by CPU 21 in order of increasing discrepancy; ¶93: the composite images are displayed on the display screen of the smartphone 1 in order of increasing discrepancy between the impression evaluation values (step 44) – Fig. 9; Fig. 10 and ¶¶100-102 discloses displaying just the template image data in order of increasing discrepancy – i.e. 
“relative impression similarity”) The only element not taught is the use of a ring arrangement as opposed to some other organized arrangement of data that is ordered as provided by Furuya according to discrepancies based on impression evaluation values. The arrangement of an ordered list in a ring as opposed to merely some other ordered arrangement appears to be a design choice that does not have any limiting effect on the claim other than aesthetics. However, Agnoli further discloses: in the ring configuration on the ring-shaped operation UI (Agnoli, Figs. 6A and 6B and ¶177: radial menu 610 including selectable options 612) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the system and technique for generating the media printout using a template as provided by Furuya, incorporating text within the template for producing a media printout as provided by Quek, by further using a ring-shaped operation user interface for designating user selections as provided by Agnoli, using known electronic interfacing and programming techniques. The modification merely substitutes one known shape for displaying a series of selectable items (i.e. grid) for another (i.e. circular), to obtain predictable results of using a ring-shaped menu for selecting options presented as a series of icons. The different shaped menus are known in the art as shown by the references cited, and one of ordinary skill in the art would have found the substitution predictable as it merely lays out the icons in a different pattern on the display. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WILLIAM A BEUTEL/Primary Examiner, Art Unit 2616
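The obviousness rationale in this Office Action turns on two concrete UI techniques: ordering candidate templates by increasing discrepancy between impression evaluation values (Furuya, ¶¶92-93) and laying the ordered items out on a ring-shaped operation UI (Agnoli, ¶177 radial menu). A minimal sketch of what those two steps could look like in code; the function names, data shapes, and target value below are illustrative assumptions, not drawn from the cited references:

```python
import math

def rank_templates(templates, target):
    """Sort templates by increasing discrepancy between each template's
    impression evaluation value and a target impression value
    (cf. Furuya's display order of composite images)."""
    return sorted(templates, key=lambda t: abs(t["impression"] - target))

def ring_positions(n, radius=1.0):
    """Place n selectable icons evenly on a circle, as in a radial menu
    (cf. Agnoli's radial menu 610 with selectable options 612)."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

# Hypothetical templates ranked against a target impression value of 0.5,
# then arranged on the ring in that order.
templates = [{"id": "A", "impression": 0.9},
             {"id": "B", "impression": 0.4},
             {"id": "C", "impression": 0.55}]
ranked = rank_templates(templates, target=0.5)
print([t["id"] for t in ranked])  # → ['C', 'B', 'A'] (smallest discrepancy first)
coords = ring_positions(len(ranked))
```

This also illustrates why the Examiner treats the ring as a substitutable layout: only `ring_positions` would change for a grid arrangement, while the ordering logic stays identical.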

Prosecution Timeline

Feb 22, 2024
Application Filed
Nov 20, 2025
Non-Final Rejection — §103, §112
Feb 20, 2026
Response Filed
Mar 20, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581262
AUGMENTED REALITY INTERACTION METHOD AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12572258
APPARATUS AND METHOD WITH IMAGE PROCESSING USER INTERFACE
2y 5m to grant Granted Mar 10, 2026
Patent 12566531
CONFIGURING A 3D MODEL WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12561927
MEDIA RESOURCE DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12554384
SYSTEMS AND METHODS FOR IMPROVED CONTENT EDITING AT A COMPUTING DEVICE
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.4%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 469 resolved cases by this examiner. Grant probability derived from career allow rate.
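The "With Interview" figure above appears to be the career allow rate plus the observed interview lift. A back-of-envelope check under that assumption (the variable names are illustrative, and the rounding to the displayed 70% and 90% is approximate):

```python
# Figures as reported on this page for this examiner.
career_allow = 328 / 469     # granted / resolved ≈ 69.9%, shown as 70%
interview_lift = 0.204       # lift observed among resolved cases with interview

with_interview = career_allow + interview_lift
print(f"{career_allow:.1%}")     # ≈ 69.9%
print(f"{with_interview:.1%}")   # ≈ 90.3%, shown as 90%
```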
