Prosecution Insights
Last updated: April 19, 2026
Application No. 18/711,219

CONTENT GENERATION SYSTEM, CONTENT GENERATION METHOD, AND STORAGE MEDIUM

Non-Final OA: §103, §112
Filed: May 17, 2024
Examiner: LE, JOHNNY TRAN
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 67% (2 granted / 3 resolved; +4.7% vs TC avg; above average)
Interview Lift: -66.7% (grant rate with vs. without interview, among resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 35 (32 currently pending; across all art units)
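The headline figures above are simple ratios. As a minimal sketch (assuming the dashboard derives them this way; the exact methodology is not shown), the 67% allow rate and the -66.7% interview lift fall out of the raw counts like so:

```python
# Career allow rate: 2 granted out of 3 resolved cases
granted, resolved = 2, 3
career_allow_rate = granted / resolved * 100  # 66.67%, displayed rounded as 67%

# Interview lift: assumed here to be the with-interview grant rate (shown as 0%)
# minus the career allow rate -- an inference, not a documented formula
with_interview_rate = 0.0
interview_lift = with_interview_rate - career_allow_rate

print(round(career_allow_rate), round(interview_lift, 1))
```

With only 3 resolved cases, one granted or denied case moves these percentages by 33 points, so the displayed figures carry very wide error bars.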

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
TC averages are estimates; based on career data from 3 resolved cases.
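The per-statute deltas above are internally consistent: subtracting each delta from the corresponding rate implies the same ~40% Tech Center baseline for every statute, which suggests the "vs TC avg" figures are computed against a single estimated average. This is an inference from the displayed numbers, not a documented methodology:

```python
# (rate %, delta vs TC avg %) as shown on the dashboard
stats = {
    "§101": (6.1, -33.9),
    "§103": (65.9, +25.9),
    "§102": (16.7, -23.3),
    "§112": (8.3, -31.7),
}

# Implied Tech Center baseline per statute: rate minus delta
implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}

print(implied_baseline)  # every statute implies the same 40.0% baseline
```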

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/17/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Specification

1. 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112 requires the specification to be written in “full, clear, concise, and exact terms.” The specification is replete with terms which are not clear, concise, and exact. The specification should be revised carefully in order to comply with 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112. Examples of unclear, inexact, or verbose terms used in the specification include: “extension material”.

Claim Objections

2. Claim 16 is objected to because of the following informality: the claim appears to contain a stray lowercase letter “i” (reciting “…selected background image i…”) that is not mentioned in any other claim connected to claim 16, nor in the specification. It can be adjusted to state “…selected background image…”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

3. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 4-5, 6, 10-11, 12, 16-17, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

5. Claim 4 states near the beginning that the system “…display the material candidate and a selected background image…”. This suggests that claim 4 uses the material candidate recited in claim 1 (“…display a material candidate including a basic material for which…”). However, claim 4 later states “…receive, as the selected material, a material candidate dragged on the background image…”, suggesting a new “material candidate” distinct from the “material candidate” defined in claim 1. Still later, claim 4 states “…display the action candidate of the material candidate when the material candidate is dragged on the background image…”, making it unclear which of the material candidates claim 4 refers to. This antecedent ambiguity renders the claim indefinite under 35 U.S.C. 112(b); for the prior-art rejections below, the limitation is assumed to read on any material candidate.

6. A similar issue (defining a “material candidate”, creating a new “material candidate”, and not clearly indicating which “material candidate” comes from the respective claim or its parent claim) is also found in claims 6, 10, 12, 16, and 18; these claims are therefore rejected under the same rationale as claim 4.

7. Claims 5, 11, and 17 depend from claims 4, 10, and 16, respectively, and are therefore rejected under the same rationale as claims 4, 10, and 16.

Claim Rejections - 35 USC § 103

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. Claims 1-5, 7-11, and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yamamoto et al. (US 20130251341 A1) in view of Raley et al. (US 20090327137 A1).
11. Regarding claim 1, Yamamoto teaches a content generation system comprising ([0165] reciting “…the motion picture processing unit 306 of the server 3 determines whether or not it is ready to generate the motion picture image Q and can generate the motion picture image because the control points for the image of the subject G, the contents of motion of the image of the subject G…”): at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to ([0224] reciting “More specifically, a program including a motion picture image-obtaining processing routine, a character string-obtaining processing routine, and a control processing routine is stored to a program memory (not shown) for storing programs.”): receive a selected material selected from the material candidate ([0055] reciting “More specifically, the subject cropping unit 304 uses a publicly known subject cropping method to generate an image P1 obtained by cropping a region including the subject G from the subject-included image.”; see Figs. 6A & 8B); display an action candidate which is a candidate of an action of the selected material; receive a selected action selected from the action candidate ([0214] reciting “Further, in addition to the display mode of the character string W explained above, for example, when the tempo b of the BGM is faster or slower than a predetermined threshold value, the music-obtaining unit 306h may make performance of the display of the character string W by adding predetermined actions such as blinking the display, enlarging and reducing the display, and displaying with swinging while scrolling and displaying the character string W.”; [0058] reciting “The storage unit 305 is constituted by, for example, a semiconductor nonvolatile memory and an HDD (Hard Disc Drive), and stores, for example, page data of web pages and the subject-cropped image data of the subject G generated by the subject cropping unit 304, which are transmitted to the user terminal 2, and stores the motion picture data of the comment-attached motion picture image K in which the character string W to be displayed and scrolled in the predetermined direction is overlaid thereon, which is transmitted to the user terminal 2.”; [0062] reciting “Each piece of motion information M is different in the continuous movement of the plurality of movable points in accordance with the type of motion (for example, raising hand, lowering hand, raising leg, lowering leg) and variations (for example, walking, running, skipping, jumping).”); generate a moving image in which the selected material performs the selected action ([0037] reciting “In this case, the subject-included image means an image which is used as a foreground image during generation of a motion picture image Q explained later and which includes a main subject in a predetermined background.”).

12. Yamamoto does not explicitly teach displaying a material candidate including a basic material for which a usage right is granted from a higher-level organization of a target organization and an extension material for which the target organization holds a usage right independently of the higher-level organization, in an order determined according to a designated job.

13. Raley teaches displaying a material candidate including a basic material for which a usage right is granted from a higher-level organization of a target organization and an extension material for which the target organization holds a usage right independently of the higher-level organization, in an order determined according to a designated job ([Abstract] reciting “A method for creating a digital work having content and usage rights related to the content, the digital work being adapted to be used within a system having repositories for controlling use of content, the method including issuing a license to a consumer, the license permitting the consumer to access the content of the digital work to be created in the future.”; [0047] reciting “Also, the invention allows a newspaper editor, for example, to send a camera crew to record content without worrying about the pictures being compromised in any way (for example, altered, edited, viewed by unauthorized personnel, or hidden and separately sold to another newspaper organization). In fact, the camera crew may have no rights whatsoever in the content as soon as the content is recorded.”).

14. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Yamamoto to incorporate the teachings of Raley, yielding a method that can apply usage rights for many different organizations, based on a certain type of priority, to any digital work, such as the material candidates provided and displayed as stated by Yamamoto.
Doing so would secure any content before the digital work is distributed, as stated by Raley ([Abstract]).

15. Regarding claim 2, Yamamoto in view of Raley teaches the content generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to (see the claim 1 rejection above): display the action candidate including a basic action candidate and an extension action candidate, the basic action candidate being the action candidate for which a usage right is granted by the higher-level organization, the extension action candidate being the action candidate for which the target organization holds a usage right independently of the higher-level organization (Raley; [Abstract] reciting “…electronically generating a license based on the usage rights…”; [0027] reciting “For example, a license, usage rights, or identification can be embedded in the card and communicated to the controller 302 and/or the rights assignment engine 310. LCD display 304, the smart card reader 306, keypad 308 and software interfaces constitute a user interface of creation server 300. The user interface permits a user to input information such as identification data”; [0047] reciting “Also, the invention allows a newspaper editor, for example, to send a camera crew to record content without worrying about the pictures being compromised in any way (for example, altered, edited, viewed by unauthorized personnel, or hidden and separately sold to another newspaper organization). In fact, the camera crew may have no rights whatsoever in the content as soon as the content is recorded.”; see the claim 1 rejection with regard to “action candidate”), and receive the selected action selected from the action candidate including the basic action candidate and the extension action candidate (Raley; [0021] reciting “In step 120, a user request for use of, i.e. a license to, the content to be created is received.”; [0029] reciting “The instructions can cause the usage rights labels to be assigned in any manner and can include any permissions and/or restrictions. For example, in the case of a video recorder, each part of the video sequence or frames can selectively be assigned different rights. This makes the rights assignment process very flexible and dynamic and permits rights assignment to be made in real time as content is created or prior to creation.”; see the claim 1 rejection with regard to “action candidate”).

16. Regarding claim 3, Yamamoto in view of Raley teaches the content generation system according to claim 1 (see the claim 1 rejection above), wherein the at least one processor is further configured to execute the instructions to display the action candidate in an order determined according to the designated job (Raley; [0029] reciting “The instructions can cause the usage rights labels to be assigned in any manner and can include any permissions and/or restrictions. For example, in the case of a video recorder, each part of the video sequence or frames can selectively be assigned different rights. This makes the rights assignment process very flexible and dynamic and permits rights assignment to be made in real time as content is created or prior to creation.”; see the claim 1 rejection with regard to “action candidate”).
17. Regarding claim 4, Yamamoto in view of Raley teaches the content generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to (see the claim 1 rejection above): display the material candidate and a selected background image (Yamamoto; [0007] reciting “a step of obtaining a character string to be scrolled and displayed in a predetermined direction being overlaid on the motion picture image obtained in the motion picture image-obtaining step”; [0155] reciting “…a specifying command of any one of image data specified on the basis of predetermined operation performed by the user with the operation input unit 206 from among a plurality of pieces of image data in the screen of the page for motion picture playing displayed on the display unit 203 of the user terminal 2. After the motion picture-obtaining unit 306d reads and obtains, from the storage unit 305, image data of the background image P2 (see FIG. 6B) related to the specifying command (step S71), the motion picture-obtaining unit 306d registers the image data of the background image P2 as the background of the motion picture image (step S72).”); receive, as the selected material, a material candidate dragged on the background image (Yamamoto; [0037] reciting “In this case, the subject-included image means an image which is used as a foreground image during generation of a motion picture image Q explained later and which includes a main subject in a predetermined background.”); display the action candidate of the material candidate when the material candidate is dragged on the background image (Yamamoto; [0214] reciting “Further, in addition to the display mode of the character string W explained above, for example, when the tempo b of the BGM is faster or slower than a predetermined threshold value, the music-obtaining unit 306h may make performance of the display of the character string W by adding predetermined actions such as blinking the display, enlarging and reducing the display, and displaying with swinging while scrolling and displaying the character string W.”); and generate a moving image in which the selected material performs the selected action at a place to which the material candidate received as the selected material is dragged in the background image (Yamamoto; [0216] reciting “At the same time, the frame images constituting the motion picture image Q are successively generated by successively combining the foreground frame images and the background image P2. Further, the comment-attached frame images in which the character string W is arranged on the frame image at the predetermined position in accordance with the scroll display speed S of the character string W may be successively generated, and may be transmitted to the user terminal 2.”).

18. Regarding claim 5, Yamamoto in view of Raley teaches the content generation system according to claim 4 (see the claim 1 and 4 rejections above), wherein the at least one processor is further configured to execute the instructions to display the action candidate of the selected material when a region of the selected material is designated, the selected material being displayed at the place to which the material candidate received as the selected material is dragged in the background image (Yamamoto; [0181] reciting “As described above, the data of the preview motion picture are configured to include the motion picture image Q including the plurality of frame images F1 to Fn made by combining the user desired background image P2 and the predetermined number of foreground frame images and the character string W scrolled and displayed in the predetermined scroll display direction Y in such a manner to be overlaid on the predetermined scroll region R set arbitrarily on the motion picture image Q.”; [0213]-[0214] reciting “For this reason, even when a plurality of songs are set and the tempo changes in the middle of the playing of the motion picture image Q, the display mode of the character string W can be controlled so that it is scrolled and displayed in association with the tempo of the music. Further, in addition to the display mode of the character string W explained above, for example, when the tempo b of the BGM is faster or slower than a predetermined threshold value, the music-obtaining unit 306h may make performance of the display of the character string W by adding predetermined actions such as blinking the display, enlarging and reducing the display, and displaying with swinging while scrolling and displaying the character string W.”).

19. Claims 7 and 13 have limitations similar to those of claim 1 and are therefore rejected under the same rationale as claim 1.

20. Claims 8 and 14 have limitations similar to those of claim 2 and are therefore rejected under the same rationale as claim 2.

21. Claims 9 and 15 have limitations similar to those of claim 3 and are therefore rejected under the same rationale as claim 3.

22. Claims 10 and 16 have limitations similar to those of claim 4 and are therefore rejected under the same rationale as claim 4.

23. Claims 11 and 17 have limitations similar to those of claim 5 and are therefore rejected under the same rationale as claim 5.

24. Claims 6, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yamamoto et al. (US 20130251341 A1) in view of Raley et al. (US 20090327137 A1) as applied to claim 1, further in view of Berkebile et al. (US 20210304509 A1) and Ogawa et al. (US 20220180904 A1).

25. Regarding claim 6, Yamamoto in view of Raley teaches the content generation system according to claim 1, wherein the material candidate includes a plurality of types of persons (Yamamoto; see Figs. 6A and 8B), the action candidate of the material candidate that is a person includes a motion, a change in expression (Yamamoto; [0098] reciting “In this case, the predetermined character string W memorized or stored in advance means, for example, greeting such as "good morning", "hello", and "good afternoon", sentences expressing feelings such as "great!" and "surprised!", frequently used texts such as texts used among predetermined friends, and symbols imitating the face of a person (so-called, emoticon).”), and the at least one processor is further configured to execute the instructions to receive a material candidate selected as a target of the selected action in a case where the selected material selected from the plurality of types of persons is received and the selected action for the target is received (Yamamoto; [0039] reciting “The operation input unit 206 includes, for example, a mouse and a keyboard which is constituted by, e.g., data input keys for inputting numerical value, characters and the like, arrow keys for, e.g., selecting and moving data, and various kinds of function key, and outputs a press-down signal of a key pressed by a user and an operation signal of a mouse to the CPU of the central control unit 201.”; [0055] reciting “More specifically, the subject cropping unit 304 uses a publicly known subject cropping method to generate an image P1 obtained by cropping a region including the subject G from the subject-included image.”; see Figs. 6A & 8B).
26. Yamamoto in view of Raley does not explicitly teach that the material candidate includes a plurality of types of persons, a plurality of types of vehicles, and a plurality of types of instruments; that the action candidate of the material candidate that is a person includes a motion, a change in expression, and a voice; or that the at least one processor is further configured to execute the instructions to receive a material candidate selected as a target of the selected action from the plurality of types of instruments as a second selected material in a case where the selected material selected from the plurality of types of persons is received and the selected action for an instrument as the target is received.

27. Berkebile teaches the material candidate includes a plurality of types of persons, a plurality of types of vehicles, and a plurality of types of instruments ([0219] reciting “For example, the graphic generator 1030 may control the screen of the image display device 2 to display a virtual object such that the virtual object appears to be in the environment as viewed by the user through the screen. By means of non-limiting examples, the virtual object may be a virtual moving object (e.g., a ball, a shuttle, a bullet, a missile, a fire, a heatwave, an energy wave), a weapon (e.g., a sword, an axe, a hammer, a knife, a bullet, etc.), any object that can be found in a room (e.g., a pencil, paper ball, cup, chair, etc.), any object that can be found outside a building (e.g., a rock, a tree branch, etc.), a vehicle (e.g., a car, a plane, a space shuttle, a rocket, a submarine, a helicopter, a motorcycle, a bike, a tractor, an all-terrain-vehicle, a snowmobile, etc.), etc. Also, in some embodiments, the graphic generator 1030 may generate an image of the virtual object for display on the screen such that the virtual object will appear to be interacting with the real physical object in the environment. For example, the graphic generator 1030 may cause the screen to display the image of the virtual object in moving configuration so that the virtual object appears to be moving through a space in the environment as viewed by the user through the screen of the image display device 2.”), the action candidate of the material candidate that is a person includes a motion, a change in expression, and a voice, and the at least one processor is further configured to execute the instructions to receive a material candidate selected as a target of the selected action from the plurality of types of instruments as a second selected material in a case where the selected material selected from the plurality of types of persons is received and the selected action for an instrument as the target is received (see the rejection above for similar details; [0018] reciting “Optionally, the object identifier is configured to obtain an input indicating a selection of the object for which the identification of the object is to be determined.”).

28. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Yamamoto in view of Raley to incorporate the teachings of Berkebile, yielding a method whose images can contain vehicles and instruments (weapons, for example) in addition to the material provided by the teachings of Yamamoto in view of Raley. Doing so would allow the display of any virtual object, as stated by Berkebile ([0219]).
29. Yamamoto in view of Raley and Berkebile does not explicitly teach the material candidate includes a plurality of types of persons, a plurality of types of vehicles, and a plurality of types of instruments, the action candidate of the material candidate that is a person includes a motion, a change in expression, and a voice…

30. Ogawa teaches the material candidate includes a plurality of types of persons, a plurality of types of vehicles, and a plurality of types of instruments, the action candidate of the material candidate that is a person includes a motion, a change in expression, and a voice ([0039] reciting “The data storage unit 33 also stores information indicating who gives utterance corresponding to a voice included in the moving image. That is, the data storage unit 33 stores the voice included in the moving image and a person giving utterance corresponding to the voice in association with each other.”)…

31. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Yamamoto in view of Raley and Berkebile to incorporate the teachings of Ogawa, providing voice-related actions that can be used for material candidates such as the persons taught by Yamamoto in view of Raley and Berkebile. Doing so would allow the extraction of voices of persons that are or are not part of the moving image, as stated by Ogawa ([0050]).

32. Claims 12 and 18 have limitations similar to those of claim 6 and are therefore rejected under the same rationale as claim 6.

Conclusion

33. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE, whose telephone number is (571) 272-5680. The examiner can normally be reached Mon-Thu 7:30am-5pm; first Fridays off; second Fridays 7:30am-4pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JOHNNY T LE/
Examiner, Art Unit 2614

/KENT W CHANG/
Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

May 17, 2024: Application Filed
Jan 05, 2026: Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 0% (-66.7%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 3 resolved cases by this examiner. Grant probability is derived from the career allow rate.
