Prosecution Insights
Last updated: April 18, 2026
Application No. 18/742,277

System and Method for Collaborative Book Creation

Final Rejection: §101, §103, §112
Filed: Jun 13, 2024
Examiner: SPAR, ILANA L
Art Unit: 3622
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Scriptive LLC
OA Round: 2 (Final)

Grant Probability: 45% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 10m
Grant Probability With Interview: 74%

Examiner Intelligence

- Career Allow Rate: 45% (grants 160 of 353 resolved cases; -6.7% vs TC avg)
- Interview Lift: strong, +28.2% across resolved cases with interview
- Typical Timeline: 3y 10m avg prosecution; 32 applications currently pending
- Career History: 385 total applications across all art units
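As a quick sanity check on the figures above, the sketch below recomputes the headline rates from the raw counts shown; it assumes the interview lift is expressed in percentage points, which matches the displayed numbers.

```python
# Recompute the dashboard figures from the raw counts shown above.
granted, resolved = 160, 353
career_allow_rate = granted / resolved * 100
print(f"{career_allow_rate:.1f}%")        # -> 45.3%, displayed as 45%

# Assumes the +28.2% lift is in percentage points off the with-interview rate.
with_interview, lift = 74.0, 28.2
without_interview = with_interview - lift
print(f"{without_interview:.1f}%")        # -> 45.8%, near the career rate
```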

Statute-Specific Performance

- §101: 12.6% (-27.4% vs TC avg)
- §103: 48.5% (+8.5% vs TC avg)
- §102: 24.0% (-16.0% vs TC avg)
- §112: 9.4% (-30.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 353 resolved cases.

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This is a Final Office Action in response to the amendments filed on 11/13/2025. Claims 1-19 are cancelled. Claims 20-29 are new. Therefore, claims 20-29 are pending and addressed below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/11/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. However, the Examiner notes that Applicant listed KR10-2242055 on the IDS but provided KR20210016147A instead; KR10-2242055 appears to be the publication of the granted patent, while KR20210016147A is the publication of the original application. The Examiner also notes that Applicant provided the PCT International Search Report, but that search report is not listed on the IDS of 3/11/2025. A new IDS should be submitted that rectifies these problems.

Claim Interpretation

The Examiner notes that claims 20-29 recite limitations that are not positively recited, that are intended results, and that are given little patentable weight.
These limitations are as follows:

- "to identify, extract, and remove the entirety of text while preserving the illustrations as image data, or to identify, extract, and remove the entirety of the illustrations while preserving the text, thereby generating an initial specimen comprising either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed" of claim 20;
- "to create a modified specimen" of claim 20;
- "to enable subsequent modification of the modified specimen by providing at least a third user with access to the modified specimen for further integration of additional user-created text or illustrations, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users" of claim 21;
- "to purchase the modified specimen" of claims 22 and 27;
- "that enables a plurality of users to evaluate and rank the modified specimen" of claims 23 and 28;
- "to create an initial specimen by segregating the entirety of the text from the illustrations in the electronic work using optical character recognition to identify and extract the text while preserving the illustrations, thereby generating either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed" of claim 25;
- "to create a modified specimen" of claim 25;
- "enables collaboration among the plurality of users by allowing multiple users to sequentially access and further modify the modified specimen stored in the database, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users" of claim 26.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 20-24 and 29 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As to claims 20-29, the specification does not describe the invention in sufficient detail to enable one of ordinary skill in the art to recognize that the inventor invented what is claimed. The test for sufficiency is whether the disclosure of the application relied upon reasonably conveys to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date. The Examiner notes that Applicant did not point to any paragraphs of the disclosure for new claims 20-29 and did not provide a statement that no new matter was added.
Claim 20 recites "without manual content separation". The specification does not explain or suggest this negative limitation. Par. 32 of the specification actually explains that preparation of preexisting works for subsequent modification can be done manually by a user. Par. 20 explains that "platform functions" such as "produce an initial specimen" can be done by a user (which may constitute any person). As such, it appears that Applicant does not have written description as of the effective filing date. Claims 21-24 are also rejected because of their dependencies on claim 20.

Claim 29 recites "wherein the networked computing platform executes a machine learning module that evaluates the modified specimen to (i) assess writing complexity or illustration completeness, and (ii) adaptively suggest modifications based on user proficiency profiles stored in the database." The specification does not mention "a machine learning module", or that the machine learning module "evaluates the modified specimen to (i) assess writing complexity or illustration completeness" and "(ii) adaptively suggest modifications based on user proficiency profiles stored in the database". The specification mentions using artificial intelligence, but the artificial intelligence is not used to evaluate the modified specimen to assess writing complexity or illustration completeness, or to suggest modifications based on user proficiency profiles. The specification does not mention any "user proficiency profiles" or any kind of profiles. Par. 55 of the specification states, "In another example, AI may be employed in the system and method of the present invention, for example to analyze a person's level of writing and, if improving or declining, reporting the same to a teacher. AI may also be employed in writing a story, such as to suggest additional description, words, etc." This paragraph states that AI may be used to analyze a person's level of writing and to suggest additional description, words, etc., but this is not connected to evaluating the modified specimen. As such, it appears that Applicant does not have written description as of the effective filing date.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 20-29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 20 recites "wherein the electronic work comprises includes integrated text and illustrations". It is unclear if the electronic work comprises integrated text and illustrations or if the electronic work includes integrated text and illustrations. As such, claim 20 is indefinite. For purposes of examination, this limitation will be interpreted as "wherein the electronic work includes integrated text and illustrations". The Examiner suggests amending as such.

Claim 20 also recites "to identify, extract, and remove the entirety of text while preserving the illustrations as image data." There is insufficient antecedent basis for the underlined limitation in the claim. Therefore, claim 20 is indefinite. For purposes of examination, this limitation will be interpreted as "to identify, extract, and remove the entirety of text while preserving the illustrations as image data". The Examiner suggests amending as such.
Claim 25 recites "by segregating the entirety of the text from the illustrations in the electronic work using optical character recognition." There is insufficient antecedent basis for the underlined limitation in the claim. Therefore, claim 25 is indefinite. For purposes of examination, this limitation will be interpreted as "by segregating entirety of the text from the illustrations in the electronic work using optical character recognition". The Examiner suggests amending as such.

Claims 21-24 and 26-29 are also rejected because of their dependencies on claims 20 or 25.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 20-29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Under Step 1, claim 20 is directed to a system and claim 25 is directed to a method. Thus, claims 20 and 25 are directed to statutory categories of patentable subject matter.
Step 2A, Prong 1: Claim 20 recites, "A computer-implemented system for story creation over a communication network, comprising: a processor; a memory operatively coupled to the processor, the memory storing a database and instructions that, when executed by the processor, cause the system to: receive, via the communication network, an electronic work uploaded by a first user, wherein the electronic work comprises includes integrated text and illustrations; automatically segregate the text from the illustrations in the electronic work using optical character recognition to identify, extract, and remove the entirety of text while preserving the illustrations as image data, or to identify, extract, and remove the entirety of the illustrations while preserving the text, thereby generating an initial specimen comprising either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed, wherein the automatic segregation enables preparation of preexisting works for subsequent modification without manual content separation; store the initial specimen in the database; provide, via the communication network, access to the initial specimen in the database to a plurality of users through a user interface of the system; and receive, via the communication network, modifications to the initial specimen from at least a second user of the plurality of users and integrating the modifications into the initial specimen to create a modified specimen, wherein the modified specimen is generated by integrating newly created user-inputted electronic text into the first initial specimen to accompany the illustrations or integrating newly created user-inputted electronic illustrations into the second initial specimen to accompany the text; and store the modified specimen in the database."
Independent claim 25 recites, "A method of collaboratively creating a story using a networked computing platform, the method comprising: providing the networked computing platform comprising a processor, a memory operatively coupled to the processor, and a database stored in the memory, wherein the networked computing platform is configured to be accessible to a plurality of users via a communication network; receiving, by the processor via the communication network, an electronic work uploaded by a first user, wherein the electronic work comprises an electronic file including both text and illustrations; automatically modifying, by the processor, the electronic work to create an initial specimen by segregating the entirety of the text from the illustrations in the electronic work using optical character recognition to identify and extract the text while preserving the illustrations, thereby generating either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed; storing, by the processor, the initial specimen in the database; providing, by the processor via the communication network, access to the initial specimen in the database to at least a second user of a plurality of users through a user interface of the networked computing platform; and receiving, by the processor via the communication network, modifications from the second user to create a modified specimen, wherein the modified specimen is generated by integrating user-inputted electronic text into the first initial specimen to accompany the illustrations, or integrating user-inputted electronic illustrations into the second initial specimen to accompany the text." These limitations, except for the italicized portions, under their broadest reasonable interpretations, recite certain methods of organizing human activity.
The claimed invention receives the work from a first user, segregates the text from the illustrations, stores the initial specimen, provides access to the initial specimen, and receives modifications to the initial specimen from at least a second user and integrates the modifications into the initial specimen, which are all personal behaviors, interactions between people, and social activities. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of "a processor", "a memory", "a communication network", "optical character recognition", "a database", "a user interface of the system", and that the work, text, and illustrations are "electronic". These additional elements are generic computing elements performing generic computer functions, such that they amount to no more than mere instructions to apply the exception using a computer. Accordingly, these additional elements, when considered individually or as a whole, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "a processor", "a memory", "a communication network", "optical character recognition", "a database", "a user interface of the system", and that the work, text, and illustrations are "electronic", are generic computing elements as supported by the specification.
These additional elements are generic computing elements performing generic computer functions, such that they amount to no more than mere instructions to apply the exception using a computer. Therefore, the independent claims are not patent eligible.

Dependent claims 21-24 and 26-29, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. §101 because the additional recited limitations fail to establish that the claims are not directed to the same abstract idea as independent claims 20 and 25 without significantly more. The additional recited limitations of claims 21-24 and 26-29 either further limit the abstract idea of claims 20 or 25 or are additional elements that are generically recited. As such, when claims 20-29 are considered individually, as a whole, or in combination, the claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 20-23 and 25-28 are rejected under 35 U.S.C. 103 as being unpatentable over Kosaka (US 2020/0388076) in view of Howell (US 2023/0038412).

Regarding claim 20, Kosaka teaches: 20. (Newly Added) A computer-implemented system for story creation over a communication network, comprising: a processor; a memory operatively coupled to the processor, the memory storing a database and instructions that, when executed by the processor, cause the system to (Fig.
9): receive, via the communication network, an electronic work uploaded by a first user ([0028] "First, in step 102, a source document may be detected, containing at least one text object and/or image object. A camera or scanner may be used to detect at least one page of printed matter, such as a poster or brochure, and create a digital version that becomes the source document. In alternate embodiments, the source document may be created by downloading, uploading, or otherwise accessing at least one screen of digitally displayed content, saved from a digital source, containing at least one text object and/or image object."), wherein the electronic work comprises includes integrated text and illustrations ([0049] "In this example, the source document on device 504 can be seen to comprise a source text object 506 and a source image object [illustrations] 508." [0078] A source document 1002 may be processed into the AR session engine 1000. The source document 1002 may first be analyzed by the object detection logic 1004 and the layout analysis engine 1006. The object detection logic 1004 may determine what objects are comprised by the source document 1002, and whether those objects are text objects or image objects. 
The layout analysis engine 1006 may analyze the source document 1002 to determine source document boundaries and source object boundaries."); automatically segregate the text from the illustrations in the electronic work using optical character recognition to identify, extract, and remove the entirety of text while preserving the illustrations as image data, or to identify, extract, and remove the entirety of the illustrations while preserving the text, thereby generating an initial specimen comprising either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed ([0079] "The text analysis engine 1008 may further perform optical character recognition (OCR) on the text objects detected by object detection logic 1004, using an OCR engine 1012. The OCR engine 1012 may analyze the pixels in the source document 1002 associated with text and recognize them as alphanumeric characters that compose words and sentences. The OCR engine 1012 may return the recognized words and sentences as digital text." [0080] "The image analysis engine 1010 may further perform image recognition algorithms within an image recognition engine 1014. These algorithms may use edge detection and object detection logic to identify key parameters that characterize the subject or subjects pictured in the image." [0038] Document creation (step 204) may begin with removing the source objects to be replaced by drawing over the areas they comprise with white (step 206).)
Examiner notes that "to identify, extract, and remove the entirety of text while preserving the illustrations as image data, or to identify, extract, and remove the entirety of the illustrations while preserving the text, thereby generating an initial specimen comprising either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed" is intended use and given little patentable weight. Kosaka further teaches wherein the automatic segregation enables preparation of preexisting works for subsequent modification without manual content separation; store the initial specimen in the database ([0038] "Document creation (step 204) may begin with removing the source objects to be replaced by drawing over the areas they comprise with white (step 206). The selected AR objects may then be drawn onto the appropriate white area (step 208). The AR session may then display the new document (step 210). In some embodiments, the new document may be further modified, saved [store the initial specimen], or printed."); and store the modified specimen in the database ([0047] "The interactive AR system 400 may include local or cloud-based memory 412. The new document [modified specimen] 410 generated by the AR controller 408 based on a user input signal from the user interface 406 may be saved in digital file form to this memory [database] 412.").
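For orientation, the text/illustration segregation step mapped above can be illustrated in isolation. The sketch below is not drawn from Kosaka or the application at issue; it assumes the OCR step has already been performed (the `text_boxes` input is a stand-in for bounding boxes an OCR engine would return) and simply masks or collects those regions to yield the two alternative "initial specimens" the claims recite.

```python
# Illustrative sketch only. `text_boxes` stands in for the output of an
# OCR engine (recognized text plus bounding boxes); the OCR itself is
# assumed, not implemented here.

def segregate(page, text_boxes):
    """Split a page into two 'initial specimens'.

    page       -- 2D list of pixel values (0 = background)
    text_boxes -- list of (row0, col0, row1, col1, recognized_text)
    Returns (illustrations_only, text_only): the page with every text
    region blanked, and the recognized text with the image discarded.
    """
    # First initial specimen: illustrations with the text removed.
    illustrations_only = [row[:] for row in page]
    for r0, c0, r1, c1, _ in text_boxes:
        for r in range(r0, r1):
            for c in range(c0, c1):
                illustrations_only[r][c] = 0  # blank the text region

    # Second initial specimen: text with the illustrations removed.
    text_only = " ".join(t for *_, t in text_boxes)
    return illustrations_only, text_only


page = [[1] * 6 for _ in range(4)]          # toy 4x6 "page" of pixels
boxes = [(0, 0, 1, 6, "Once upon a time")]  # one text line across the top
art, words = segregate(page, boxes)
print(words)    # -> Once upon a time
print(art[0])   # -> [0, 0, 0, 0, 0, 0]  (text row blanked)
```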
Kosaka discusses receiving modifications to the initial specimen from a user at a user interface in [0044], but not specifically: provide, via the communication network, access to the initial specimen in the database to a plurality of users through a user interface of the system; and receive, via the communication network, modifications to the initial specimen from at least a second user of the plurality of users and integrating the modifications into the initial specimen to create a modified specimen, wherein the modified specimen is generated by integrating newly created user-inputted electronic text into the first initial specimen to accompany the illustrations or integrating newly created user-inputted electronic illustrations into the second initial specimen to accompany the text.

However, Howell teaches provide, via the communication network, access to the initial specimen in the database to a plurality of users through a user interface of the system; and receive, via the communication network, modifications to the initial specimen from at least a second user of the plurality of users and integrating the modifications into the initial specimen to create a modified specimen ([0052] "Interface 300 can also enable a user to view multiple pages simultaneously, reorder the pages, and/or manage transitions between pages. In some instances, interface 300 is loaded from previous work performed by a different author. Thereby, a user can modify and build upon work performed by others." See also [0078] and [0079].), wherein the modified specimen is generated by integrating newly created user-inputted electronic text into the first initial specimen to accompany the illustrations or integrating newly created user-inputted electronic illustrations into the second initial specimen to accompany the text ([0076] "Obtaining pictures (operation 618) can include accessing locally or remotely stored photographs.
Obtaining pictures (operation 618) can also include capturing one or more photographs with a camera device and saving one or more digital image [user-inputted electronic illustrations] files." [0072] "Typically, the digital story will include one or more pages, where each page has visual and/or textual aspects, along with one or more audio components.").

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding provide, via the communication network, access to the initial specimen in the database to a plurality of users through a user interface of the system; and receive, via the communication network, modifications to the initial specimen from at least a second user of the plurality of users and integrating the modifications into the initial specimen to create a modified specimen, wherein the modified specimen is generated by integrating newly created user-inputted electronic text into the first initial specimen to accompany the illustrations or integrating newly created user-inputted electronic illustrations into the second initial specimen to accompany the text, as taught by Howell, since both Kosaka and Howell modify content, and so that users can modify content in a collaborative format (Howell, [0028]).

Regarding claim 25, Kosaka teaches: 25. (Newly Added) A method of collaboratively creating a story using a networked computing platform, the method comprising: providing the networked computing platform comprising a processor, a memory operatively coupled to the processor, and a database stored in the memory (Fig.
9), receiving, by the processor via the communication network, an electronic work uploaded by a first user, wherein the electronic work comprises an electronic file including both text and illustrations ([0028] "First, in step 102, a source document may be detected, containing at least one text object and/or image object. A camera or scanner may be used to detect at least one page of printed matter, such as a poster or brochure, and create a digital version that becomes the source document. In alternate embodiments, the source document may be created by downloading, uploading, or otherwise accessing at least one screen of digitally displayed content, saved from a digital source, containing at least one text object and/or image object."); automatically modifying, by the processor, the electronic work to create an initial specimen by segregating the entirety of the text from the illustrations in the electronic work using optical character recognition to identify and extract the text while preserving the illustrations, thereby generating either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed ([0079] "The text analysis engine 1008 may further perform optical character recognition (OCR) on the text objects detected by object detection logic 1004, using an OCR engine 1012. The OCR engine 1012 may analyze the pixels in the source document 1002 associated with text and recognize them as alphanumeric characters that compose words and sentences. The OCR engine 1012 may return the recognized words and sentences as digital text." [0080] "The image analysis engine 1010 may further perform image recognition algorithms within an image recognition engine 1014. These algorithms may use edge detection and object detection logic to identify key parameters that characterize the subject or subjects pictured in the image."
[0038] Document creation (step 204) may begin with removing the source objects to be replaced by drawing over the areas they comprise with white (step 206).) Examiner notes that "to identify and extract the text while preserving the illustrations, thereby generating either: 1) a first initial specimen comprising only the illustrations with the text removed, or 2) a second initial specimen comprising only the text with the illustrations removed" is intended use and given little patentable weight. Kosaka further teaches storing, by the processor, the initial specimen in the database ([0038] "Document creation (step 204) may begin with removing the source objects to be replaced by drawing over the areas they comprise with white (step 206). The selected AR objects may then be drawn onto the appropriate white area (step 208). The AR session may then display the new document (step 210). In some embodiments, the new document may be further modified, saved [store the initial specimen], or printed.").

Kosaka discusses receiving modifications to the initial specimen from a user at a user interface in [0044], but not specifically: wherein the networked computing platform is configured to be accessible to a plurality of users via a communication network; providing, by the processor via the communication network, access to the initial specimen in the database to at least a second user of a plurality of users through a user interface of the networked computing platform; and receiving, by the processor via the communication network, modifications from the second user to create a modified specimen, wherein the modified specimen is generated by integrating user-inputted electronic text into the first initial specimen to accompany the illustrations, or integrating user-inputted electronic illustrations into the second initial specimen to accompany the text.
However, Howell teaches wherein the networked computing platform is configured to be accessible to a plurality of users via a communication network ([0028] "Digital story presentation in a viewer can include coordination of presentation of the textual, visual, audio and/or other aspects of each piece of content. Resulting digital stories can be shared and accessed by viewers different (e.g., remote) from the user (or users) who generated the digital story."); providing, by the processor via the communication network, access to the initial specimen in the database to at least a second user of a plurality of users through a user interface of the networked computing platform; and receiving, by the processor via the communication network, modifications from the second user to create a modified specimen ([0052] "Interface 300 can also enable a user to view multiple pages simultaneously, reorder the pages, and/or manage transitions between pages. In some instances, interface 300 is loaded from previous work performed by a different author. Thereby, a user can modify and build upon work performed by others." See also [0078] and [0079].), wherein the modified specimen is generated by integrating user-inputted electronic text into the first initial specimen to accompany the illustrations, or integrating user-inputted electronic illustrations into the second initial specimen to accompany the text ([0076] "Obtaining pictures (operation 618) can include accessing locally or remotely stored photographs. Obtaining pictures (operation 618) can also include capturing one or more photographs with a camera device and saving one or more digital image [user-inputted electronic illustrations] files." [0072] "Typically, the digital story will include one or more pages, where each page has visual and/or textual aspects, along with one or more audio components.").
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding wherein the networked computing platform is configured to be accessible to a plurality of users via a communication network; providing, by the processor via the communication network, access to the initial specimen in the database to at least a second user of a plurality of users through a user interface of the networked computing platform; and receiving, by the processor via the communication network, modifications from the second user to create a modified specimen, wherein the modified specimen is generated by integrating user-inputted electronic text into the first initial specimen to accompany the illustrations, or integrating user-inputted electronic illustrations into the second initial specimen to accompany the text, as taught by Howell, since both Kosaka and Howell modify content, and so that users can modify content in a collaborative format (Howell, [0028]).

Regarding claim 21, Kosaka discusses receiving modifications to the initial specimen from a user at a user interface in [0044] but not specifically wherein the instructions, when executed by the processor, further cause the system to enable subsequent modification of the modified specimen by providing at least a third user with access to the modified specimen for further integration of additional user-created text or illustrations, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users.
However, Howell teaches wherein the instructions, when executed by the processor, further cause the system to enable subsequent modification of the modified specimen by providing at least a third user with access to the modified specimen for further integration of additional user-created text or illustrations, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users ([0049] "Sharing module 216 enables digital story content generated and/or edited by the user to be shared with viewer V via network 110. In some instances, sharing module 216 enables collaboration across network 110, such that users in different locations can collaborate on the digital content." Examiner notes that "thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users" is intended use and given little patentable weight.).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding wherein the instructions, when executed by the processor, further cause the system to enable subsequent modification of the modified specimen by providing at least a third user with access to the modified specimen for further integration of additional user-created text or illustrations, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users, as taught by Howell, since both Kosaka and Howell modify content, and so that users can modify content in a collaborative format (Howell, [0028]).

Regarding claim 22, Kosaka teaches the system of claim 20, wherein the processor and memory are accessible via a platform with which the user interface is connected (Fig. 4).
Kosaka does not specifically teach wherein the platform provides a medium to purchase the modified specimen. However, Howell teaches wherein the platform provides a medium to purchase the modified specimen ([0209] "The digital story application can have a publisher review module. This will enable users to upload a digital story, enable collaboration of the digital story, create a final version of a digital story, and then upload the digital story to a user interface that can be accessed by publishers and editors. Publishers and editors can access the publisher and editor function with a subscription to locate new talent. A publisher can offer to purchase or license the content from the user via that publisher review module. A notification system can be implemented to inform a user when a publisher or editor has reviewed the digital story." See also [0189].).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding wherein the platform provides a medium to purchase the modified specimen, as taught by Howell, since both Kosaka and Howell are analogous art, and so that users can create additional versions based on reviews of the submitted collaborative content (Howell, [0133]).

Regarding claim 23, Kosaka teaches the system of claim 20, wherein the processor and memory are accessible via a platform with which the user interface is connected (Fig. 4). Kosaka does not specifically teach wherein the platform includes an evaluation and ranking function that enables a plurality of users to evaluate and rank the modified specimen.
However, Howell teaches wherein the platform includes an evaluation and ranking function that enables a plurality of users to evaluate and rank the modified specimen ([0133] "For example, if 500 users ranked collaborative content for a digital story and individuals from New York preferred collaborator number 115 and individuals from California preferred collaborator number 10 for character 1, the user may opt to have two variations of the digital story and make one version available for East Coast viewers and make another version available for West Coast viewers.").

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding wherein the platform includes an evaluation and ranking function that enables a plurality of users to evaluate and rank the modified specimen, as taught by Howell, since both Kosaka and Howell are analogous art, and so that digital asset buyers can buy digital assets they are interested in (Howell, [0195]).

Regarding claim 26, Kosaka does not specifically teach wherein the networked computing platform further enables collaboration among the plurality of users by allowing multiple users to sequentially access and further modify the modified specimen stored in the database, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users.
However, Howell teaches wherein the networked computing platform further enables collaboration among the plurality of users by allowing multiple users to sequentially access and further modify the modified specimen stored in the database, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users ([0049] "Sharing module 216 enables digital story content generated and/or edited by the user to be shared with viewer V via network 110. In some instances, sharing module 216 enables collaboration across network 110, such that users in different locations can collaborate on the digital content.").

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding wherein the networked computing platform further enables collaboration among the plurality of users by allowing multiple users to sequentially access and further modify the modified specimen stored in the database, thereby transforming the electronic work into a collaboratively created electronic story that integrates contributions from a plurality of users, as taught by Howell, since both Kosaka and Howell modify content, and so that users can modify content in a collaborative format (Howell, [0028]).

Regarding claim 27, Kosaka does not specifically teach wherein the networked computing platform provides a medium to purchase the modified specimen. However, Howell teaches wherein the networked computing platform provides a medium to purchase the modified specimen ([0209] "The digital story application can have a publisher review module. This will enable users to upload a digital story, enable collaboration of the digital story, create a final version of a digital story, and then upload the digital story to a user interface that can be accessed by publishers and editors.
Publishers and editors can access the publisher and editor function with a subscription to locate new talent. A publisher can offer to purchase or license the content from the user via that publisher review module. A notification system can be implemented to inform a user when a publisher or editor has reviewed the digital story.").

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding wherein the networked computing platform provides a medium to purchase the modified specimen, as taught by Howell, since both Kosaka and Howell are analogous art, and so that users can create additional versions based on reviews of the submitted collaborative content (Howell, [0133]).

Regarding claim 28, Kosaka does not specifically teach wherein the networked computing platform includes an evaluation and ranking function that enables a plurality of users to evaluate and rank the modified specimen. However, Howell teaches wherein the networked computing platform includes an evaluation and ranking function that enables a plurality of users to evaluate and rank the modified specimen ([0133] "For example, if 500 users ranked collaborative content for a digital story and individuals from New York preferred collaborator number 115 and individuals from California preferred collaborator number 10 for character 1, the user may opt to have two variations of the digital story and make one version available for East Coast viewers and make another version available for West Coast viewers.").
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the interactive content of Kosaka with the digital story creation of Howell by adding wherein the networked computing platform includes an evaluation and ranking function that enables a plurality of users to evaluate and rank the modified specimen, as taught by Howell, since both Kosaka and Howell are analogous art, and so that digital asset buyers can buy digital assets they are interested in (Howell, [0195]).

Claims 24 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Kosaka (US 2020/0388076), in view of Howell (US 2023/0038412), in further view of Kumar (US 2023/0085466).

Regarding claims 24 and 29, Kosaka modifies specimens as discussed above and Howell uses machine learning to make revisions or modifications to the specimen in [0170]. However, Kosaka and Howell do not specifically teach to execute a machine learning module that evaluates the modified specimen to (i) assess writing complexity or illustration completeness, and (ii) adaptively suggest modifications based on user proficiency profiles stored in the database. However, Kumar teaches to execute a machine learning module that evaluates the modified specimen to (i) assess writing complexity or illustration completeness ([0017] "The system then receives and processes the digital content by one or more machine learning models that are trained to determine/extract one or more attributes, objects, entities and/or characteristics of the digital content. For example, the one or more machine learning models, such as Adobe Sensei, can process the digital content to determine...text readability [writing complexity]."), and (ii) adaptively suggest modifications based on user proficiency profiles stored in the database ([0053] "The affinity profile includes a set of user affinities (e.g., preferences, attractions, leanings, etc.)
that are segmented into distinct categories such as preferred colors, headline length, reading level [user proficiency profiles] ...sentiment, gender, background environment, activity, and other characteristics for digital content." See also [0048] and [0054].).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the systems of Kosaka and Howell by adding to execute a machine learning module that evaluates the modified specimen to (i) assess writing complexity or illustration completeness, and (ii) adaptively suggest modifications based on user proficiency profiles stored in the database, as taught by Kumar, since Kosaka, Howell, and Kumar are analogous art, and so that the content aligns with the target's affinity profile (Kumar, [0054]).

Response to Arguments

With regard to the §101 rejection, on pp. 8-9, Applicant states, "In addition, the claims are not directed to an abstract idea under Step 2A. They recite a specific, technology-centric workflow involving improvements to computer technology and meaningful data transformation: automatically segregating text from illustrations in an uploaded mixed electronic work using OCR, generating and storing an initial specimen, and enabling networked, multi-user integrations of newly created content into that specimen. This not only functions to repurpose a preexisting electronic work, but also improves computer processing of composite electronic works by eliminating manual content separation and establishing a structured mechanism for subsequent modification. As in Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016), the present claims are directed to a specific improvement in computer functionality, namely, automated segregation of multimodal content using OCR to create structured, modifiable data objects.
Such automatic OCR segregation produces a technological transformation of data format and structure, not just a display or abstract manipulation." Examiner respectfully disagrees. It is unclear how the claims "involv[e] improvements to computer technology". It is unclear how the claims produce a technological transformation of the data format and structure. Therefore, Examiner is not persuaded.

On p. 9, Applicant states, "The cooperation of OCR-based segregation, machine-generated specimen creation and storage, and collaborative, network-based integrations amounts to more than merely invoking a computer; it effects a concrete data transformation tied to specific computing components (processor, memory, database)." Examiner respectfully disagrees. Except for generic computing elements, all of the functions of the claims are abstract and could be performed using pen and paper. See the updated §101 rejection above. Therefore, Examiner is not persuaded.

On pp. 10 and 11, Applicant argues that Campagna and Yamazaki do not teach the new claims. These arguments have been considered but are moot, since claims 1-19 have been cancelled and those references are not used to reject new claims 20-29.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIE P. BRADY, whose telephone number is (571) 272-4855. The examiner can normally be reached Tues-Thurs 8:00-2:00 ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ilana Spar, can be reached at (571) 270-7537. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARIE P BRADY/
Primary Examiner, Art Unit 3622

Prosecution Timeline

Jun 13, 2024
Application Filed
Aug 12, 2025
Non-Final Rejection — §101, §103, §112
Oct 10, 2025
Interview Requested
Oct 28, 2025
Examiner Interview Summary
Oct 28, 2025
Applicant Interview (Telephonic)
Nov 13, 2025
Response Filed
Jan 13, 2026
Final Rejection — §101, §103, §112
Mar 26, 2026
Request for Continued Examination
Apr 08, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 9234927
MEASURING INSTRUMENT AND MEASURING METHOD FEATURING DYNAMIC CHANNEL ALLOCATION
2y 5m to grant Granted Jan 12, 2016
Patent 9236006
DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
2y 5m to grant Granted Jan 12, 2016
Patent 9214112
DISPLAY DEVICE AND DISPLAY METHOD
2y 5m to grant Granted Dec 15, 2015
Patent 9208708
ELECTRO-OPTICAL DEVICE AND ELECTRONIC APPARATUS
2y 5m to grant Granted Dec 08, 2015
Patent 9201529
Touch Sensing Method and Portable Electronic Apparatus
2y 5m to grant Granted Dec 01, 2015
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
45%
Grant Probability
74%
With Interview (+28.2%)
3y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 353 resolved cases by this examiner. Grant probability derived from career allow rate.
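The note above says the grant probability is derived from the career allow rate, and the examiner panel gives the underlying counts (160 granted of 353 resolved) and a +28.2-point interview lift. A minimal sketch of how the headline figures could be reproduced, assuming the lift is an additive percentage-point adjustment (the page implies but does not state the exact model):

```python
# Reproduce the headline projections from the career counts shown above.
# Assumption (not stated on the page): the interview lift is an additive
# percentage-point bump on the base allow rate.
granted, resolved = 160, 353   # "160 granted / 353 resolved"
interview_lift = 0.282         # "+28.2% Interview Lift"

allow_rate = granted / resolved              # career allow rate
with_interview = allow_rate + interview_lift

print(f"Grant probability: {allow_rate:.0%}")      # 45%
print(f"With interview: {with_interview:.0%}")     # 74%
```

Both rounded figures match the dashboard's 45% and 74% projections, which supports the additive-lift reading.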
