Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This action is responsive to the communication filed on 3/22/2024.
2. Claims 1-20 are pending in the case.
3. Claims 1, 10 and 17 are independent claims.
Claim Objections
Claim 17 is objected to because of the following informalities:
Claim 17 should be amended to “a .
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 10-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 10-16 recite the phrase “tangible computer readable medium”, which is not explicitly defined in the specification. The plain meaning of “tangible” includes:
a. capable of being perceived especially by the sense of touch
b. capable of being precisely identified or realized by the mind
c. capable of being appraised at an actual or approximate value.
The United States Patent and Trademark Office (USPTO) is obliged to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called a machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter).
A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation “non-transitory” to the claim. Cf. Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (suggesting that applicants add the limitation “non-human” to a claim covering a multi-cellular organism to avoid a rejection under 35 U.S.C. § 101). Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).
Therefore, under the broadest reasonable interpretation, the recited phrase covers a signal per se, which is not a process, a machine, a manufacture, or a composition of matter.
Accordingly, the claim fails to recite statutory subject matter as defined in 35 U.S.C. § 101.
To overcome the rejection under 35 U.S.C. § 101, the Examiner suggests amending the claims to recite a “non-transitory computer readable medium.”
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Knipp et al. (hereinafter “Knipp”), U.S. Patent Application Publication No. 2016/0225187 A1.
Claim 1:
Knipp teaches A method of creating an electronic story comprising: (e.g., creation of story content par. 53; User interface component 185 generally facilitates creation of story content by a developer and may be embodied as a graphical user interface (GUI) and corresponding computer application. Par. 43; In one embodiment, a user's motions or gestures may be used for providing feedback during storytelling and modifying the story.)
receiving a user prompt used at a story generation server; (receiving user information at a storytelling platform (i.e., story generation server) par. 30; system 100 includes a network 105 communicatively coupled to a storytelling platform 110, a user interface 155, a presentation component 130, a presentation environment sensor(s) 145, a storage 120, and a content authoring and production component 180. par. 41; Sensor(s) 145 provide information from the presentation environment to storytelling platform 110 for facilitating storytelling. For example, such information might include spoken information from a user, such as words spoken par. 44; user interface 155 presents aspects of a story guide for assisting a user to tell a story including querying the user for story elements par. 103; User interface 501 also shows examples of prompts provided to a user-storyteller by an embodiment of story guide 150. )
submitting the user prompt to artificial intelligence system trained using previously written stories; (e.g., to guide user story, the method includes submitting user prompt to AI engine trained using history information such as favorite characters or plotlines from previous presented stories stored par. 27; User history information, such as a child's favorite characters, plotlines, and settings, or story elements that have not been used recently (e.g., something new and untried) may be used by an embodiment to provide an enhanced story experience. Par. 93; Some embodiments of story guide 150 use an AI engine of operating system 112 and knowledge representation component 115 in conjunction with one or more sensor(s) 145 to first determine a level of assistance needed by a storyteller. Par. 109; Embodiments of assembler 162 can assemble a story based on user-provided information (which may be obtained via story guide 150), environmental and contextual information, available story resources and story logic (including logic associated with user information 129 such as favorite story elements, elements to avoid, frequency of story elements in previously presented stories, bedtime, or other user preferences, user settings, or user history information).)
using the artificial intelligence system, generating text for an electronic story file; (e.g., querying the user for story elements and using AI system to generate narrative text or suggest story paths par. 25; For example, in one embodiment, text-matching is employed to recognize a specific known story text or scene and provide corresponding imagery and sounds. Par. 44; user interface 155 presents aspects of a story guide for assisting a user to tell a story including querying the user for story elements or suggesting story elements (e.g., characters, plot themes, etc.) and generating narrative text for a user (such as a parent) to read to a child, while corresponding images and sounds are presented via presentation component(s) 130 par. 101; The software agent may be part of an artificial intelligence component of operating system 112, as described previously. In one embodiment, the agent comprises a virtual assistant that may be summoned by the storyteller at any time to suggest story paths, characters, challenges, solutions, or provide other guidance. par. 104; In particular, user interface 502 provides a suggested narrative 525 to the storyteller for beginning a story. Par. 106; Embodiments of storytelling engine 160, operating in conjunction with other aspects of system 100, may assemble, evaluate, and/or modify a story based on user-provided information, environmental and contextual information, and story resources and story logic par. 114; However, it is contemplated that in some embodiments, aspects of storytelling platform 110 (which may use an AI engine of operating system 112, or logic rules) can learn storytelling tendencies of the user. )
communicating the text to a scene generation system; (e.g., communicating selected story elements to story block generator (i.e., scene generation system) par. 55; Story block generator 186 generally facilitates creating story blocks or story threads. Examples of story blocks or story threads are described in connection to FIGS. 6A-6C. At a high level, a story block includes a module of a story, such as a scene or scene-portion, with character(s), setting, sounds and images (which may be dependent on the character(s), rather than the particular story block), plot or character interactions including dialog, etc. par. 55; Storytelling engine 160 may utilize user-provided information obtained via story guide 150 (of FIG. 1B) or other information for determining which particular character(s), setting(s), or story elements to use, as well as which blocks to use and an order of the block sequence.)
in the scene generation system, analyzing the text for scene information; (e.g., analyzing narrative text or story manuscript text for corresponding images and sounds (i.e., scene information) par. 44; generating narrative text for a user (such as a parent) to read to a child, while corresponding images and sounds are presented via presentation component(s) 130 par. 55; For example, where the user's favorite character is a butterfly, a butterfly may be used as the character, with a corresponding setting determined to be a field of flowers. Similarly, a butterfly may be used wherein the user selects a butterfly in response to a query presented over user interface 155 near the beginning of the storytelling experience. par. 73; GUI 800 further includes a story manuscript window 821 depicting a manuscript or outline, which may also include text to be presented to a user (such as a parent) via story guide 150, either as a prompt to facilitate storytelling or as text to be read by the user. In this example, a story manuscript is shown with markers added to indicate when various story elements (e.g., video and audio media files in this example) should be played par. 139; In one embodiment, the suggestions, prompts, queries, or narrations include story elements identified from the information received in step 713. (For example, where the storyteller is telling a story about a penguin, a narration provided as guidance information may include a story scene involving the penguin.))
determining scene information; (e.g., determining scene information based on information from user input or metadata of story blocks par. 28; For example, based on information received from a child or parent, different characters, subplots, or scenes may be introduced to the story. par. 55; For example, where the user's favorite character is a butterfly, a butterfly may be used as the character, with a corresponding setting determined to be a field of flowers. Similarly, a butterfly may be used wherein the user selects a butterfly in response to a query presented over user interface 155 near the beginning of the storytelling experience. Par. 129; At step 731, a sequence of one or more story blocks is determined based on the metadata of the blocks and contextual information. In one embodiment, the sequence of blocks will determine the flow of the story (e.g., scenes, interactions, dialogs, etc.), such as described in connection to FIGS. 6A-6C. )
communicating scene information to graphics processor;
in the graphics processor, generating graphics for the story based on the scene information. (e.g., communicating scene information to a component and generating graphical content from libraries to present the scene par. 56; libraries referenced or called by the block, par. 74; For example, in one embodiment, if at or near storytelling time story logic determines that a child-user's bedtime is soon, a shorter version of a particular story may be assembled (by storytelling engine 160) and presented, which may leave out extended sound effects, video or motions, or non-essential story blocks. Similarly, where it is determined that the user desires to fill a longer time period (for example, on a rainy Saturday afternoon), then a longer version of the particular story can be assembled and presented, which includes extended content (e.g., scenes, sound effects, videos, other story threads or blocks, etc.) par. 117; In some embodiments, storytelling engine 160 looks ahead at potential future branches, probable story block sequences, scenes, or use of other story elements in order to make suitable decisions about story assembly, evaluation, and modification. Par. 158; based at least on a portion of the metadata and contextual information, determining a sequence of one or more story blocks from the set of story blocks; and determining, using the one or more corresponding story element libraries, a first story element for a first placeholder in a first story block, thereby populating the first story block with the first story element.)
Claim 2 depends on claim 1:
Knipp teaches wherein the graphics comprise images, animated images or videos. (e.g., story elements such as visual images, animations and videos par. 60; It also illustrates how a dynamic story can be instantiated from a structure 601 by storytelling engine 160, wherein the block templates or placeholders are filled with specific story content for the story being presented. By way of example and not limitation, placeholders may be used for story elements including not only characters, settings, sound effects, visual images, animations, videos,)
Claim 3 depends on claim 1:
Knipp teaches further comprising combining the graphics with text related to the graphics to create a story file. (e.g., combining the graphics of a penguin with the penguin narrative to create a downloadable story file par. 50; user accounts or account information, which may be used by embodiments providing content through a subscription model or downloadable story packages or expansion sets, or may facilitate users sharing their stories or story content with other users on other Narratarium systems. Par. 71; the coded story includes references to and/or linkages between each of the individual story elements and/or other media files. Par. 73; In this example, a story manuscript is shown with markers added to indicate when various story elements (e.g., video and audio media files in this example) should be played. par. 139; In one embodiment, the suggestions, prompts, queries, or narrations include story elements identified from the information received in step 713. (For example, where the storyteller is telling a story about a penguin, a narration provided as guidance information may include a story scene involving the penguin.))
Claim 4 depends on claim 1:
Knipp teaches wherein the graphics are created for a virtual reality viewer or an augmented reality viewer. (e.g., providing story-related visual information (i.e., visual graphics) for virtual-reality goggles (i.e., a virtual reality viewer) par. 3; For example, one embodiment of the Narratarium immerses a user (or users, audience, or the like) into the story by projecting visual and/or audio story elements into the space surrounding the audience. par. 24; Similarly, as a parent tells a story to a child (including a parent, grandparent, or other person(s) telling the story from a remote location), the room is filled with images, colors, sounds, and presence, based on the story par. 40; It is contemplated that embodiments of the invention may use any type of presentation component for providing visual and/or audio story information to a user, including smart-glasses, virtual-reality goggles, television screens or display monitors, and screens from user devices (e.g., tablets, mobile phones, or the like. par. 155; The output of the accelerometers or gyroscopes may be provided to the display of the computing device 900 to render immersive augmented reality or virtual reality.)
Claim 5 depends on claim 1:
Knipp teaches where the story file is communicated to a user computing device. (e.g., user accounts allows for sharing of created stories to other user devices or downloading created stories via computer device par. 50; user accounts or account information, which may be used by embodiments providing content through a subscription model or downloadable story packages or expansion sets, or may facilitate users sharing their stories or story content with other users on other Narratarium systems. Par. 155; The output of the accelerometers or gyroscopes may be provided to the display of the computing device 900 to render immersive augmented reality or virtual reality. )
Claim 6 depends on claim 1:
Knipp teaches wherein the story file is communicated to a web portal. (e.g., created stories are communicated to a user account of a software application accessible via network (i.e., web portal) par. 52; In one embodiment, the production component 180 comprises a software application tool, and associated hardware and software components, for use by content producers, such as publishers, developers, or in some cases a Narratarium user par. 50; user accounts or account information, which may be used by embodiments providing content through a subscription model or downloadable story packages or expansion sets, or may facilitate users sharing their stories or story content with other users on other Narratarium systems. Par. 70; In some embodiments, production component 180, or an aspect thereof, may be embodied as a stand-alone application, a suite of computer programs par. 151; Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. )
Claim 7 depends on claim 4:
Knipp teaches wherein the user computing device is a virtual and/or augmented reality viewing device. (e.g., virtual-reality goggles par. 40; It is contemplated that embodiments of the invention may use any type of presentation component for providing visual and/or audio story information to a user, including smart-glasses, virtual-reality goggles, television screens or display monitors, and screens from user devices (e.g., tablets, mobile phones, or the like. par. 155; The output of the accelerometers or gyroscopes may be provided to the display of the computing device 900 to render immersive augmented reality or virtual reality. )
Claim 8 depends on claim 7:
Knipp teaches wherein the file is communicated using a known protocol. (e.g., wirelessly or wired sharing of created stories using known protocol in the art par. 50; user accounts or account information, which may be used by embodiments providing content through a subscription model or downloadable story packages or expansion sets, or may facilitate users sharing their stories or story content with other users on other Narratarium systems. Par. 75; In some embodiments, production component 180 might also include functionality for formatting various media elements and story resources to best fit or operate with presentation component(s) 130 Par. 153; By way of example, and not limitation, communication media includes wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. )
Claim 9 depends on claim 7:
Knipp teaches wherein the electronic story file is accessed using an API. (e.g., functionality for formatting various media elements and story resources (i.e., API functionality) par. 75; In some embodiments, production component 180 might also include functionality for formatting various media elements and story resources to best fit or operate with presentation component(s) 130)
Independent Claim 10:
Claim 10 is substantially encompassed by claim 1; therefore, the Examiner relies on the same rationale set forth for claim 1 to reject claim 10.
Claim 11 depends on claim 10:
Claim 11 is substantially encompassed by claim 2; therefore, the Examiner relies on the same rationale set forth for claim 2 to reject claim 11.
Claim 12 depends on claim 10:
Claim 12 is substantially encompassed by claim 3; therefore, the Examiner relies on the same rationale set forth for claim 3 to reject claim 12.
Claim 13 depends on claim 10:
Claim 13 is substantially encompassed by claim 5; therefore, the Examiner relies on the same rationale set forth for claim 5 to reject claim 13.
Claim 14 depends on claim 10:
Claim 14 is substantially encompassed by claim 6; therefore, the Examiner relies on the same rationale set forth for claim 6 to reject claim 14.
Claim 15 depends on claim 14:
Claim 15 is substantially encompassed by claim 8; therefore, the Examiner relies on the same rationale set forth for claim 8 to reject claim 15.
Claim 16 depends on claim 14:
Claim 16 is substantially encompassed by claim 9; therefore, the Examiner relies on the same rationale set forth for claim 9 to reject claim 16.
Independent Claim 17:
Claim 17 is substantially encompassed by claim 1; therefore, the Examiner relies on the same rationale set forth for claim 1 to reject claim 17.
Claim 18 depends on claim 17:
Claim 18 is substantially encompassed by claim 2; therefore, the Examiner relies on the same rationale set forth for claim 2 to reject claim 18.
Claim 19 depends on claim 17:
Claim 19 is substantially encompassed by claim 3; therefore, the Examiner relies on the same rationale set forth for claim 3 to reject claim 19.
Claim 20 depends on claim 17:
Claim 20 is substantially encompassed by claims 6, 8 and 9; therefore, the Examiner relies on the same rationale set forth for claims 6, 8 and 9 to reject claim 20.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Fuller; Andrew et al. US 20130165225 A1
See abstract; A user may interact with the linear story via a NUI system to alter the story and the images that are presented to the user. In an example, a user may alter the story by performing a predefined exploration gesture. This gesture brings the user into the 3-D world of the displayed image. In particular, the image displayed on the screen changes to create the impression that a user is stepping into the 3-D virtual world to allow a user to examine virtual objects from different perspectives or to peer around virtual objects.
McCarty; Michael et al. US 20210064650 A1
Par. 18; Story synthesis as used herein refers to the digital creation of sequenced information that tells a story based on a request. In particular, a request for story synthesis is processed by the automated computing system creatively retrieving or generating information, described herein in units of content items, and then arranging and assembling that information to present a story.
Pair; Jackson US 20210383800 A1 teaches API and Web portal features
Par. 85; In various embodiments, this cloud networking architecture is an open architecture that leverages application programming interfaces (APIs);
Par. 90; The cloud computing environments 375 can interface with the virtualized network function cloud 325 via APIs that expose functional capabilities of the VNEs 330, 332, 334, etc., to provide the flexible and expanded capabilities to the virtualized network function cloud 325.
Par. 93; The functionality may be provided to user devices as a resident program, as a client portion of a client-server architecture, as a portal to a service, e.g., as a Web app, and so on.
Par. 96; Searchable records may include commercial libraries that provide an API, or internal proprietary libraries.
Par. 155; Such machine-readable code and/or commands may be provided as direct API and/or system calls to a content creation system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY ORR whose telephone number is (571)270-1308. The examiner can normally be reached 9AM-5PM EST M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571)272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HENRY ORR/Primary Examiner, Art Unit 2172