DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to papers filed on 12/23/2025.
Claims 1, 3, 4, 6, 10, 12, 13, 15, 18, and 20 have been amended.
No claims have been cancelled.
No claims have been added.
Claims 1-20 are pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
The claims are directed to a process (method, as introduced in Claim 18), a system (Claim 10), and a non-transitory computer-readable storage medium with executable instructions (Claim 1); thus, Claims 1-20 fall within one of the four statutory categories. See MPEP 2106.03.
Step 2A, Prong 1:
The claimed invention recites an abstract idea according to MPEP §2106.04. The limitations of the independent claims that recite the abstract idea are reproduced below.
Claims 1-20 recite (as represented by the language of Claim 1):
identifying a pane rendered on a user interface of a computer system, the rendered pane being associated with an asset bucket comprising a persistent working space for processing an input asset;
detecting an activity in the pane, wherein the activity comprises inputting the input asset onto the asset bucket;
determining a feature associated with the input asset based on one or more of a geolocation, a timestamp, or a range of times associated with the input asset;
automatically identifying, from stored real-time multimedia data and based on the determined feature of the input asset, a related asset related to the input asset;
retrieving the related asset related to the input asset in the asset bucket based on a threshold corresponding to the determined feature;
rendering an output of an action in the asset bucket, wherein rendering the output comprises displaying, via the user interface of the computer system, the related asset related to the input asset; and
storing a digital file comprising the rendered output of the action.
The claim limitations reproduced above, as drafted, recite a process that, under its broadest reasonable interpretation, covers the performance of managing personal behavior or relationships or interactions between people in the form of identifying and providing related assets. Other than reciting a computer implementation, nothing in the claim elements precludes the steps from encompassing the performance of managing personal behavior or relationships or interactions between people, which represents the abstract idea of certain methods of organizing human activity. But for the recitation of generic computer system components, the claimed invention merely recites a process for providing an asset, identifying related assets, and providing those related assets. For example, a user could place an asset in a workspace, identify related assets, be provided any related assets, and store those related assets in a file.
Step 2A, Prong 2:
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements such as:
a computer system; one or more processors; and/or one or more computer-readable media/memory storing computer-executable instructions;
a user interface of a computer system, including rendered panes, asset buckets, and persistent working space for processing input data;
a user interface for rendering and displaying output;
and storing data in digital files.
In particular, the additional elements cited above, beyond the abstract idea, are recited at a high level of generality and amount to no more than a generic recitation of basic functionality, i.e., mere instructions to apply the judicial exception using generic computer components.
Additionally, elements such as an asset bucket comprising a persistent working space, a pane associated with an asset bucket, and rendering an output of an action in the asset bucket represent generic interface elements for interacting with the assets (such as inputting, viewing, etc.). These elements are likewise recited at a high level of generality and amount to no more than mere instructions to apply the judicial exception using generic computer components.
Accordingly, since the specification describes the additional elements in general terms, without describing the particulars, the additional elements may be broadly but reasonably construed as generic computing components being used to perform the judicial exception (see specification at [0036], reciting generic computer and device types). These claimed additional elements merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
Thus, the additional claim elements are not indicative of integration into a practical application because the claims do not involve improvements to the functioning of a computer or to any other technology or technical field (MPEP 2106.05(a)), do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and do not apply or use the abstract idea in some other meaningful way beyond generally linking its use to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e)). Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea, and the claims are directed to an abstract idea.
Step 2B:
The claims do not include additional elements, individually or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept at Step 2B. Thus, the claims are not patent eligible.
Dependent Claims:
Claims 2-9, 11-17, 19, and 20 recite further elements related to the steps of the parent claims. These activities fail to differentiate the claims from the related activities in the parent claims and fail to provide any material that would render the claimed invention significantly more than the identified abstract idea, as outlined below.
Claims 2, 11, and 19 recite “processing the input asset in the persistent working space without altering configuration of an original source asset in a source of the input asset”, which further limits the steps of the parent claims, but does not make the claims any less abstract.
Claims 3, 12, and 20 recite “wherein the input asset comprises a selected multimedia file or environmental data”, which further limits the steps of the parent claims, but does not make the claims any less abstract.
Claims 4 and 13 recite “identifying a window that is associated with an asset selection; and detecting a dragging of the input asset from the window to the pane that is associated with the asset bucket”, which further limits the steps of the parent claims, but does not make the claims any less abstract.
Claims 5 and 14 recite “wherein the asset selection comprises one or more assets that are stored in a database or received from third-party servers”, which further limits the steps of the parent claims, but does not make the claims any less abstract. The database and servers are recited at a high level of generality and merely represent generic tools for storing or providing data. Therefore, they do not integrate the abstract idea into a practical application or provide an inventive concept.
Claims 6 and 15 recite “wherein the input asset in the asset bucket is associated with a device identification, the geolocation, and the timestamp”, which further limits the steps of the parent claims, but does not make the claims any less abstract.
Claims 7 and 16 recite “searching for one or more assets that are related to the input asset based on a geolocation proximity of the one or more assets to the input asset”, which further limits the steps of the parent claims, but does not make the claims any less abstract.
Claims 8 and 17 recite “wherein the action comprises searching for one or more assets that are related to the input asset based on a temporal proximity of the one or more assets to the input asset”, which further limits the steps of the parent claims, but does not make the claims any less abstract.
Claim 9 recites “wherein the activity comprises providing a link to a location in memory of the input asset”, which further limits the steps of the parent claims, but does not make the claims any less abstract. The memory is recited at a high level of generality and merely represents a generic tool for storing data. The links are recited at a high level of generality and merely represent generic tools for accessing data (it is also noted that these links are only provided and not actively used for any activities). Therefore, they do not integrate the abstract idea into a practical application or provide an inventive concept.
The claims do not provide any new additional limitations or meaningful limits beyond the abstract idea that are not addressed above with respect to the independent claims; therefore, they do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea. Thus, after considering all claim elements, both individually and as a whole, it has been determined that the claims do not integrate the judicial exception into a practical application or provide an inventive concept. Therefore, Claims 2-9, 11-17, 19, and 20 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 9-12, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Miura et al. (Pub. No. US 2022/0276750 A1) in view of Zhao et al. (CN 110765285 A), and further in view of Kuivamaki (GB 2517998 A).
In regards to Claims 1, 10, and 18, Miura discloses:
A computer system/method, comprising: a processor; and a computer-readable storage media/memory including instructions that, when executed with the processor, cause the computing device to: (see at least [0010])
identifying a pane rendered on a user interface of a computer system, the rendered pane being associated with an asset bucket comprising a persistent working space for processing the input asset; ([0240]; [0241], “memory detail view” related to a set of media items (assets); [0150], “views” are equivalent to “windows”; a set/collection of media items is comparable to an asset bucket; Fig. 6J; [0265]; [0266], as focus is provided to content, other content (such as photo galleries) is still viewable/accessible, indicating that they are “persistent” windows or areas of the display, which represent persistent working spaces; it is also noted that the un-displayed photos are not deleted from the collection (these aspects of the interfaces are also relevant to the interfaces in examples such as Fig. 6A-Fig. 6R; Fig. 9A-Fig. 9C; [0097]; [0332]; [0333]; [0376]-[0379]; [0383]; etc., used in the subsequent citations); the workspaces, panes, etc. are rendered on the screens of computing devices providing interfaces for user interaction)
detecting an activity in the pane, wherein the activity comprises inputting the input asset onto the asset bucket; (Fig. 9A-Fig. 9C; [0376], user selects (activity) an image (asset) and that image is added to a “one up view” interface (representing an asset bucket/persistent workspace/pane))
determining a feature associated with the input asset based on one or more of a geolocation, a timestamp, or a range of times associated with the input asset; (Fig. 6A-Fig. 6E; [0229]; [0232]; [0272]; [0302]; etc., assets are associated with features that are used to identify related assets, including geographic location, time ranges, and timestamps).
identifying based on the determined feature of the input asset, a related asset related to the input asset; (Fig. 9A-Fig. 9C; [0376]-[0379], the detail user interface (“one up view”), user can make a request (swipe up) to identify related content (assets); [0332]; [0333], additional example of requesting to search for related content, user selects an “indicator” such as a face (feature) (there are also other types of indicators/features) to search for content related by that face/person (this is not the only example in the reference of requesting and/or retrieving related content); [0383], demonstrates the similarities/relationships between the embodiments in Fig. 9A-Fig. 9C and Fig. 6A-Fig. 6R, etc.)
retrieving the related asset related to the input asset in the asset bucket; (Fig. 9A-Fig. 9C; [0376]-[0379], the detail user interface (“one up view”), user can make a request (swipe up) to identify related content (assets); [0332]; [0333], additional example of requesting to search for related content, user selects an “indicator” such as a face (feature) (there are also other types of indicators/features) to search for content related by that face/person (this is not the only example in the reference of requesting and/or retrieving related content); [0383], demonstrates the similarities/relationships between the embodiments in Fig. 9A-Fig. 9C and Fig. 6A-Fig. 6R, etc.)
rendering an output of an action in the asset bucket, wherein rendering the output comprises displaying, via the user interface of the computer system, the related asset related to the input asset; (Fig. 9A-Fig. 9C; [0376]-[0379]; etc., the related content (images, etc.) are rendered and displayed in the workspace (asset bucket), “…displays a detail user interface comprising related content for the first visual media item that has been determined to be related to the first visual media item…”)
storing a digital file comprising the rendered output of the action ([0240], “…detail user interface (also referred to as a “memory detail view” hereinafter)…”, the detail user interface being where the output is rendered/displayed; [0313], “…provided the ability to save a memory detail view for later viewing.”, the workspace with the content can be saved in a case file)
Miura discloses the above system/method for searching and identifying related content (such as related images, video, etc.). Miura does not explicitly disclose the use of a threshold for identifying related content; however, Zhao teaches:
retrieving the related asset based on a threshold corresponding to the feature (page 13, lines 1-8, “(4) face image matching and identifying:”, images are identified by matching facial characteristics (features of the image) to determine a similarity match; the similarity is additionally compared to a threshold)
This known technique is applicable to the system of Miura as they both share characteristics and capabilities, namely, they are directed to identifying related or similar content (including by using faces in images).
One of ordinary skill in the art would have recognized, before the effective filing date of the claimed invention, that applying the known technique of Zhao would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Zhao to the teachings of Miura would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such data processing features into similar systems. Further, applying the similarity threshold to Miura with comparable image matching techniques would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow additional accuracy and confidence in matching content and reduce unrelated results based on established similarity levels.
Miura/Zhao discloses the above system/method for inputting assets, identifying related assets based on content, and rendering related assets (including media collections). Miura/Zhao does not explicitly disclose that the related assets are stored real-time multimedia data; however, Kuivamaki teaches:
automatically identifying, from stored real-time multimedia data and based on the determined feature of the input asset, a related asset related to the input asset; (page 5, lines 11-30, media (including a user’s photographs) include metadata that provides context (determined features such as location, date, time, etc.); Abstract, the system collates media content based on context data (creates a collection of multimedia, such as image, photograph, current location data, current weather data, etc.); Figure 5b; page 2, lines 11-22, can use the context (such as location and time) to identify content including images and current weather (the current weather at the location represents “stored real-time multimedia data” that is associated with other content/assets); page 21, lines 21-30, the related assets (including the “stored real-time multimedia data”), is automatically identified based on context of an image (asset), “…action of taking photographs acts as a trigger for the apparatus to collate the just-captured photographic data 504 with other data associated with the user's current context. In this example the other data includes the current weather conditions 506, the current date and time 508, the current location 510, a textual message composed by the user 512, the last music tracks which the user listened to while snowboarding…”, representing multiple types of “stored real-time multimedia data”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Miura/Zhao so as to have included automatically identifying, from stored real-time multimedia data and based on the determined feature of the input asset, a related asset related to the input asset, as taught by Kuivamaki in order to allow users to relay information in a personal and appealing way to friends without having to manually collect and collate it (Kuivamaki, page 21, lines 21-30; page 22, lines 15-24; page 23, lines 14-30). One of ordinary skill in the art would understand how to apply the automatic identification of stored real-time multimedia data related to the context/features of the photograph in Kuivamaki to the context/features of the selected/manipulated photos in Miura/Zhao.
It is also noted that, although the rendering and storing of a digital file of the related asset is disclosed in Miura/Zhao, Kuivamaki also performs these steps on the retrieved stored real-time multimedia data (see at least Figure 5b; page 10, lines 9-16; page 22, lines 4-14).
In regards to Claims 2, 11, and 19, Miura discloses:
the operations further comprising: processing the input asset in the persistent working space without altering configuration of an original source asset in a source of the input asset (Fig. 6A-Fig. 6R; Fig. 9A-Fig. 9C; [0332]; [0333]; [0376]-[0379]; [0383]; etc., as described in the parent claims (see also [0374]), the image (input asset) is processed in the working space to identify related images; related images can come from the same source as the input asset (in this example, the source being “…all media items (e.g., photos, videos) associated with their device…”, which would include the input asset and the related assets; other examples of shared sources also appear throughout the reference, such as albums, collections, etc.); the source assets for the related assets (such as stored files) are not altered when the input asset is processed (items can be edited later; however, the source items are not altered and are simply retrieved and displayed))
In regards to Claims 3, 12, and 20, Miura discloses:
wherein the input asset comprises a selected multimedia file or environmental data ([0229]; Fig. 6A-Fig. 6E; [0232], see also [0272]; [0285]; Fig. 9A-Fig. 9C; [0376], user selects (activity) an image (asset) and that image is added (input)).
In regards to Claim 9, Miura discloses:
wherein the activity includes providing a link to a location in memory of the input asset ([0207], “As used here, the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500 (FIGS. 1, 3, and 5). For example, an image (e.g., icon), a button, and text (e.g., hyperlink) each optionally constitute an affordance.”, selecting a thumbnail (see [0215]) to access an image in the memory would show the providing of a link (the thumbnail being the affordance that includes the link)).
Claims 4, 5, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Miura in view of Zhao, further in view of Kuivamaki, and further in view of official notice (now considered admitted prior art).
In regards to Claims 4 and 13, Miura discloses:
wherein the acts further comprise:
identifying a window that is associated with asset selections; ([0240]; [0241], “memory detail view” related to a set of media items (assets); [0150], “views” are equivalent to “windows” (see Applicant’s specification at [0018], “Likewise, ‘window’ and ‘pane’ may indicate similar items and may be used interchangeably without affecting the meaning of the context in which they are used.”); a set/collection of media items is comparable to an asset bucket); and
detecting a dragging of the input asset from [one location to a second location] ([0158], “…event…is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end).”).
Miura discloses the use of windows/panes (including for displaying assets and asset buckets) and the ability to drag a displayed object, as shown above. Although it is implied by the descriptions, which include all the necessary items/activities to be performed, Miura/Zhao/Kuivamaki does not explicitly disclose the dragging of the asset/object from one window/pane to another window/pane. However, dragging an asset/object from one window/pane to another window/pane is old and well known to those of ordinary skill in the art, and official notice to that effect is hereby taken. For example, drag-and-drop techniques on touch screen displays (such as those used in Miura) have been provided on devices for manipulating displayed objects for many years.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Miura/Zhao/Kuivamaki so as to have included dragging an asset/object from one window/pane to another window/pane in order to provide quick and efficient object movement using common and known touch screen interaction techniques for manipulating objects in a display (See KSR [127 S Ct. at 1739] "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results."), and since doing so could be performed readily and easily by any person of ordinary skill in the art, with neither undue experimentation, nor risk of unexpected results.
In regards to Claims 5 and 14, Miura discloses:
wherein the asset selections include one or more assets that are stored in a database or received from third-party servers ([0004]-[0006]; [0472], stores a library of user images and videos in the memory, the library (organized set of assets) represents a database).
Claims 6-8 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Miura in view of Zhao, further in view of Kuivamaki, and further in view of Yoon et al. (Pub. No. US 2016/0034539 A1).
In regards to Claims 6 and 15, Miura/Zhao/Kuivamaki discloses:
wherein the input asset in the asset bucket is associated with the geolocation and the timestamp ([0229]; Fig. 6A-Fig. 6E; [0232]).
Miura also discloses the above data/tags (in addition to the above citations, Miura also discloses that picture/video data can include metadata, see [0101]; [0272], etc.). Miura/Zhao/Kuivamaki does not explicitly disclose the use of device identification data; however, Yoon teaches:
wherein the asset is associated with a device identification, geolocation, and a timestamp ([0089]-[0091], “When content is a photograph, metadata may include, for example, theme and event information extracted from the photograph (such as a family trip or a summer vacation), time information, place information obtained by using global positioning system (GPS) information, person information obtained by using face information extracted from the photograph, and device information such as a manufacturer or a model name of a camera…An image tag may include a camera manufacturer, a camera model, a firmware version, a photographed time, a stored time, latitude and longitude of a photographed place, a photographer, a shutter speed, a focal length, an iris value, an exposure adjustment value, a photographing program, a photometry mode, a white balance, a black and white reference points, resolution, a size, an orientation, a file compression method, an image name, and copyright holder information. Also, for example, metadata of a photograph may include information about an identification value of a device that captured the photograph, a location where the photograph is stored, authority to use the photograph, a time when the photograph is executed, and a location where the photograph is executed...”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have further modified the system of Miura/Zhao/Kuivamaki so as to have included wherein the asset is associated with a device identification, as taught by Yoon.
Miura/Zhao/Kuivamaki discloses a “base” method/system in which media items are associated with multiple metadata tags, as shown above. Yoon teaches a comparable method/system in which media items are associated with multiple metadata tags, as shown above. Yoon also teaches an embodiment in which the asset is further associated with a device identification in addition to a geolocation and a timestamp, as shown above. One of ordinary skill in the art would have recognized that adapting Miura/Zhao/Kuivamaki so that the asset is associated with a device identification could be performed with the technical expertise demonstrated in the applied references. (See KSR [127 S Ct. at 1739] "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.")
In regards to Claims 7 and 16, Miura discloses:
wherein the action includes searching for one or more assets that are related to the input asset based on a geolocation proximity of the one or more assets to the input asset (Fig. 6O; [0282]-[0284]; [0235], assets are shown on a map (geolocation) based on the proximity of their location characteristics/aspects).
In regards to Claims 8 and 17, Miura discloses:
wherein the action includes searching for one or more assets that are related to the input asset based on a temporal proximity of the one or more assets to the input asset ([0272], assets are timestamped; [0231], assets can be retrieved based on temporal proximity, in this example, the temporal proximity is the same date from a different year).
Additional Prior Art Identified but not Relied Upon
Beckett, Jr. (Pub. No. US 2015/0212654 A1). Discloses adding assets to a persistent workspace (see at least [0067]-[0069]).
Grosz et al. (Pub. No. US 2016/0139761 A1). Discloses identifying similar content/assets (see at least [0211]; [0270]).
Kim (Pub. No. US 2022/0319232 A1). Discloses identifying similar content/assets, including thresholds (see at least [0011]; [0061]; [0082]).
Millar et al. (Pub. No. US 2024/0039911 A1). Discloses techniques for correlating and organizing assets, matching assets stored in “buckets”, and correlating assets based on location (see at least [0123]).
Overton (WO 9831138 A1). Discloses the organizing and/or sequencing of images using data such as location, timestamp, device identification, etc. (see at least page 15, line 9 to page 17, line 15).
Response to Arguments
Applicant’s arguments filed 12/23/2025 have been fully considered but they are not persuasive.
I. Rejection of Claims under 35 U.S.C. §101:
Applicant argues that “These claim elements do not include managing any law enforcement personnel or any perpetrators related to the incident. Instead, these claim elements recite managing assets (evidence) related to the incident.” Although it appears that this refers to an example in the specification, it is unclear how this argument pertains to the instant claims (which are much broader) or how it demonstrates that the claims are not drawn to any abstract idea or certain methods of organizing human activity. Applicant provides no additional arguments or explanation. Applicant is also reminded that the illustrative examples provided in M.P.E.P. § 2106.04(a)(2) are not an exhaustive list, and abstract ideas are not limited to these examples.
Applicant argues that the claims were not analyzed as a whole, but provides no evidence, such as identifying what material Applicant believes was not analyzed, and does not point out any deficiencies in the analysis to demonstrate this assertion. The claim elements were considered both individually and as a whole.
Applicant asserts that the claimed invention provides improvements such as, “accuracy and speed with which related assets can be automatically identified… reduces computer loads…eliminates the need for manual matching or searching… efficiency of managing the input asset, the related asset, or other asset”. However, Applicant does not provide any background or evidence to support these assertions. For example, Applicant does not provide any evidence regarding existing systems, why they are deficient, how Applicant’s claimed invention solves these problems in a meaningful manner, etc. Applicant merely asserts that the claimed invention provides these alleged benefits, but provides no explanation of how/why.
Applicant fails to provide any evidence or explanation of how the claimed invention is “meaningful and [significantly] more than the abstract idea” and/or how/why it “forms a specific, discrete implementation of an inventive method”. Applicant makes reference to rejection language, but provides no explanation regarding how/why it would be a mischaracterization.
Several of the above issues were addressed in the previous Office action, including the making of assertions without providing any evidence, explanation, or context. Those remarks are provided here for reference:
(A) (1) Applicant asserts that the Office did not properly identify the judicial exceptions and additional features. However, Applicant has not provided any response indicating the alleged problems with the rejection, such as identifying what Applicant believes the judicial exceptions and additional features to be and why the Office was incorrect or incomplete. It is noted that the updated rejection language above may provide additional clarity for Applicant’s understanding. The rejection language has been updated for clarity and to address the new claim language; however, the analysis outcome and grounds of rejection have not changed.
(A) (2) Applicant asserts that the claims do not recite an abstract idea. Applicant then recites the claim language, but fails to explain how/why this does not represent an abstract idea. The bolded words in the claims do not clearly demonstrate why Applicant believes this is not an abstract idea. As explained above, the processor and interface elements are merely generic computer components for applying the steps of the claims.
Applicant recites an example from MPEP 2106.04(a)(1), but provides no explanation regarding how this applies to Applicant’s claims.
(B) (1) The claim limitations were analyzed individually and as a whole. Applicant alleges that this was not done, but provides no arguments or evidence.
(B) (2) Applicant appears to be attempting to provide an explanation of improvements; however, many of the remarks are based on explanations of activities or benefits that the claimed invention “can do” but that are not clearly part of the claimed invention. There is not sufficient background or evidence (including in the citations to the specification) to support the alleged improvement and/or explain how the additional elements provide this improvement. Applicant does not clearly identify the additional elements and/or explain how they would provide an improvement or integrate the claim into a practical application.
See MPEP 2106.05(a), Improvements to the Functioning of a Computer or To Any Other Technology or Technical Field (“If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.”).
(C) This section repeats arguments similar to those addressed above and is subject to the same responses as provided above.
II. Rejection of Claims under 35 U.S.C. §103:
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN D SENSENIG whose telephone number is (571)270-5393. The examiner can normally be reached M-F: 10:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin can be reached on 571-272-6872. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.S/Examiner, Art Unit 3629
/NATHAN C UBER/Supervisory Patent Examiner, Art Unit 3626