Detailed Action
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the communications filed 5/01/2025. As per the claims filed 11/15/2023:
Claims 1-20 were cancelled.
Claims 21-40 were added.
Claims 21-40 are pending.
Claims 21 and 31 are independent claims.
Note Regarding Prior Art
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Note Regarding AIA Status
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 21-23, 31-33 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mohamed R. Amer et al. (US PG Pub No: 2019/0304157; Filed: 12/21/2018) (hereinafter: Amer).
Claim 21:
As per independent claim 21, Amer discloses a method, comprising:
receiving a request to create a story synthesis that simulates an answer to a query [[0034] The example of FIG. 1A illustrates system 100A including computing system 110 and animation generator 161. Computing system 110 includes language parser 107, user interaction module 108, and machine learning module 109. Language parser 107 accepts textual description 102 and generates parsed text. Interaction module 108 accepts input 104 from user interface device 170]. User input serves as a query.
inputting the received request to create the story synthesis into a machine learning model to generate an output [[0034] Once trained, machine learning module 109 may receive data from language parser 107 and interaction module 108 and output data structure 151. Animation generator 161 converts data structure 151 into animation 180.];
determining a stage of processing of the machine learning model, from a plurality of stages of processing, at a time when the request to create the story synthesis was inputted [[0035] Machine learning module 109 may represent a system that uses machine learning techniques (e.g., neural networks, deep learning, generative adversarial networks (GANs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), or other artificial intelligence and/or machine learning techniques) to generate data structure 151 based on data of a type corresponding to textual description 102 and/or input 104. Machine learning module 109 may apply first-order logic to identify objects, attributes of objects, and spatio-temporal relationships among objects in the textual description 102]. Machine learning model has a number of processing stages.
generating an output from the machine learning model that is based on the determined stage of processing of the machine learning model [[0036] machine learning module 109 is trained using supervised machine learning techniques that involve providing training data 101 to machine learning module 109 (including both the training sets and corresponding appropriate output data structures), and determining a set of parameters for generating, from new data, appropriate data to be included within a data structure. (In some examples, the data structure is generated in advance, but the components of data that fill the data structure are predicted.)];
creating a structure for the story synthesis based on the generated output from the machine learning model [[0036] data structure 151 corresponds to a representation of the scene or event sequences that includes each relevant spatio-temporal and object-attribute relationship. Data structure 151 may serve as an intermediate representation between text, natural language, and non-verbal (e.g. gesture information) and the visual domain, and may include sufficient information to enable an animation or visual representation of the scene to be generated from data structure 151.]
obtaining story components to populate the created story structure [[0177] Computing system 310 may generate a data structure (802). For instance, input processing module 321 outputs information about the story to graph module 322. Graph module 322 generates, based on the information about the sequence of events, a spatio-temporal composition graph 328 specifying attributes associated with the objects and further specifying relationships between at least some of the plurality of objects. Claim 11, determining a response to the query by analyzing, by the computing system, information sufficient to create an animation based on the spatio-temporal composition graph; and outputting, by the computing system, the response to satisfy the query.] and
stitching together the story components based on the created structure to output the answer to the query [claim 11, determining a response to the query by analyzing, by the computing system, information sufficient to create an animation based on the spatio-temporal composition graph; and outputting, by the computing system, the response to satisfy the query, [0034] Once trained, machine learning module 109 may receive data from language parser 107 and interaction module 108 and output data structure 151. Animation generator 161 converts data structure 151 into animation 180.].
Claim 22:
As per claim 22, which depends on claim 21, Amer discloses wherein the stage of processing at the time when the request to create the story synthesis was inputted into the machine learning model is a stage of using specified content items to select a story template [[0036] Once trained, machine learning module 109 uses the parameters identified as a result of the training process to generate new data structures from new input data. For instance, in the example of FIG. 1A, machine learning module 109 uses parameters derived from the training process to generate data structure 151 from textual descriptions 102 and input 104. In some examples, data structure 151 corresponds to a representation of the scene or event sequences that includes each relevant spatio-temporal and object-attribute relationship.] and
based on the stage of processing, generating the output from the machine learning model that is a selection of a story template [[0036] Once trained, machine learning module 109 uses the parameters identified as a result of the training process to generate new data structures from new input data. For instance, in the example of FIG. 1A, machine learning module 109 uses parameters derived from the training process to generate data structure 151 from textual descriptions 102 and input 104. In some examples, data structure 151 corresponds to a representation of the scene or event sequences that includes each relevant spatio-temporal and object-attribute relationship.]. The output of the machine learning model is the data structure.
Claim 23:
As per claim 23, which depends on claim 22, Amer discloses further comprising: receiving a selection of the story template [[0036] data structure (template) is selected by the machine learning module] and using the machine learning model to determine which types of other content items to use based on selection of the story template [[0041] Animation generator 161 determines, based on data structure 151, attributes of an animation that corresponds to data structure 151.].
Claim 31:
As per independent claim 31, it recites a system comprising a processor configured to perform the method of claim 21; therefore, it is rejected under the same rationale as claim 21 above.
Claim 32:
As per claim 32 it is rejected under the same rationale as claim 22 above.
Claim 33:
As per claim 33 it is rejected under the same rationale as claim 23 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 24, 34 is/are rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Chan V. Hou (US PG Pub No. 2019/0349619; Filed: 11/14/2019) (hereinafter: Hou).
Claim 24:
As per claim 24, which depends on claim 23, Amer discloses retrieving content items but fails to specifically disclose further comprising: determining that one or more types of other content items to use based on the selection of the story template is not available; and
in response to determining that the one or more types of other content items to use based on the selection of the story template is not available, crawling one or more content sources to obtain the one or more types of other content items.
Hou, in the same field of content retrieval, discloses these limitations in that [[0286] If the process determines that the content item is no longer available, at block 708 the process may identify a substitute content item from a pool of content items or from one or more other sources (optionally including a pool of content items stored and maintained by the content scheduling process) having one or more specified similar properties (e.g., subject, length, source, creator, posting date, popularity, etc.). By way of example, a pool of substitute content items (or other content) may have previously been manually or automatically identified for the specific program (e.g., a dedicated program pool)… optionally, in addition to or instead of using a dedicated program pool of substitute backup content items, the process may search for and select substitute content items from a broader pool (e.g., a pool specific to a given channel) or from third party content hosting sites.]
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the content retrieval teachings of Amer to determine that one or more types of other content items to use based on the selection of the story template is not available; and in response to determining that the one or more types of other content items to use based on the selection of the story template is not available, crawling one or more content sources to obtain the one or more types of other content items as disclosed by Hou. The motivation for doing so would have been to quickly and reliably substitute unavailable content in order to generate content with no interruptions, resulting in more efficient results.
Claim 34:
As per claim 34, it is rejected under the same rationale as claim 24 above.
Claim(s) 26-28, 36-38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Michele Francis (US PG Pub No. 2019/0302993; Published: 10/3/2019; Priority: 03/29/2019) (hereinafter: Francis).
Claim 26:
As per claim 26, which depends on claim 21, Amer discloses a user feedback mechanism [[0044] Machine learning module 109 may update data structure 151 to incorporate this additional information. Machine learning module 109 may also update its parameters to learn from the revisions or refinements proposed by the user. Such an update or refinement capability may be represented by feedback input 157].
Amer, however, fails to specifically disclose further comprising: generating a verification prompt, wherein the verification prompt includes information relating to some or all portions of the stitched story components and seeks user confirmation; and presenting the verification prompt to a user from whom the request was received.
Francis, in the same field of machine-learning-assisted story creation, discloses these limitations in that [[0012] the method further includes presenting the user with at least one request for media or approving media corresponding to the at least two structured questions. [0034] a set of digital media items may include business or personal related images, videos, or other media provided by or within the control/access of the user combined or intercut with a set of produced images and/or video clips, product images and/or video clips, business images and/or video clips, stock images and/or video clips, graphic images, icons, or other media determined by a narrative arc template and chosen or approved by a user.]. User must be prompted to approve components.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the feedback mechanism of Amer to generate a verification prompt, wherein the verification prompt includes information relating to some or all portions of the stitched story components and seeks user confirmation, and to present the verification prompt to a user from whom the request was received, as disclosed by Francis. The motivation for doing so would have been to significantly reduce the user's workload by having users focus mainly on approving the finished work, or suggesting changes to what was programmatically assembled based on existing user data and machine learning models [0052].
Claim 27:
As per claim 27, which depends on claim 26, it is rejected under the same rationale as claim 26 above. Additionally, Amer and Francis disclose receiving user confirmation in response to the presented verification prompt; and associating the user confirmation with some or all portions of the stitched story components presented in the verification prompt as being relevant to the received request. Francis [[0012] the method further includes presenting the user with at least one request for media or approving media corresponding to the at least two structured questions. [0034] a set of digital media items may include business or personal related images, videos, or other media provided by or within the control/access of the user combined or intercut with a set of produced images and/or video clips, product images and/or video clips, business images and/or video clips, stock images and/or video clips, graphic images, icons, or other media determined by a narrative arc template and chosen or approved by a user.]. User must be prompted to approve components.
Claim 28:
As per claim 28, which depends on claim 26, it is rejected under the same rationale as claim 26 above. Additionally, Amer and Francis disclose further comprising: determining, based on a response to the verification prompt, that information relating to some or all portions of the stitched story components does not match the received request; and in response to determining that the information does not match the received request, transmitting a signal to an electronic device of the user to cause the electronic device to prompt for further input. Francis discloses these limitations in that [[0012] the method further includes presenting the user with at least one request for media or approving media corresponding to the at least two structured questions. [0034] a set of digital media items may include business or personal related images, videos, or other media provided by or within the control/access of the user combined or intercut with a set of produced images and/or video clips, product images and/or video clips, business images and/or video clips, stock images and/or video clips, graphic images, icons, or other media determined by a narrative arc template and chosen or approved by a user [0054] In some embodiments, some or all of the assembly may be accomplished programmatically with the user simply approving the output or selecting changes.]. User must be prompted to approve components.
Claim 36:
As per claim 36, it is rejected under the same rationale as claim 26 above.
Claim 37:
As per claim 37, it is rejected under the same rationale as claim 27 above.
Claim 38:
As per claim 38, it is rejected under the same rationale as claim 28 above.
Claim(s) 29, 39 is/are rejected under 35 U.S.C. 103 as being unpatentable over Amer in view of Dipock Das et al. (US PG Pub No. 2019/0034498; Published: 01/31/2019) (hereinafter: Das).
Claim 29:
As per claim 29, which depends on claim 21, Amer discloses the query including user information but fails to specifically disclose further comprising: determining whether the received request includes a query that is specific to a geographic location; and in response to determining that the received request includes a query that is specific to a geographic location, relating story synthesis to the geographic location.
Das, in the same field of machine learning models customizing data for a user, discloses this limitation in that [[0246] the disambiguation model 991 may associate the NL request 915 and any number of additional parameters with the disambiguation recommendation 1240. For instance, in some embodiments, the disambiguation model 991 may associate the NL request 915 and a geographical location of the user with the disambiguation recommendation 1240].
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the query of Amer to determine whether the received request includes a query that is specific to a geographic location and, in response to determining that the received request includes a query that is specific to a geographic location, relate the story synthesis to the geographic location, as disclosed by Das. The motivation for doing so would have been to customize the story to the user's geographic location, thus resulting in more accurate and relevant content being generated.
Claim 39:
As per claim 39, it is rejected under the same rationale as claim 29 above.
Allowable Subject Matter
Claims 25, 30, 35, and 40 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
The prior art of record fails to disclose or suggest, alone or in combination, the limitations of claims 25, 30, 35, and 40.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HOWARD CORTES whose telephone number is (571)270-1383. The examiner can normally be reached on M-F, 8:00 am - 5:00 pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Scott T. Baderman, can be reached on (571)272-3644. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HOWARD CORTES/ Primary Examiner, Art Unit 2118