DETAILED ACTION
This Action is a response to the filing received 27 October 2023. Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 21 November 2023 is being considered by the examiner.
Claim Objections
Claims 2, 4-5, 15 and 17-18 are objected to because of the following informalities:
Claim 2, line 2 should be corrected to read “determining, for each device in the subset of the plurality of devices”;
Claim 4, line 2 should be corrected to read “that likely caused the potential performance issue”;
Claim 5, lines 2-3 should be corrected to read “that likely caused the potential performance issue”; and
Claims 15 and 17-18 should be corrected in the same manner as claims 2 and 4-5, respectively. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 and 10-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
At Step 1, Examiner finds that claims 1-13 are directed to processes, claims 14-19 are directed to machines, and claim 20 is directed to an article of manufacture.
At Step 2A, Prong 1, the claims are evaluated for whether they recite (set forth or describe) a judicial exception, such as an abstract idea.
Claim 1 recites the following mental process steps: (1) determining, for a metric attribute and a subset of the plurality of devices having at least one common context, a potential performance issue for the subset of devices using aggregated metric data for the metric attribute; and (2) determining, using at least a portion of the aggregated metric data, a portion of a code base or a hardware subcomponent that likely caused the potential performance issue. Each is an observation, evaluation, judgment or opinion; that is, a human engineer or tester may determine, based on a particular metric (such as power consumption) and using aggregated metric data for a subset of devices having a common context (such as a device type or operating system type or version), that a potential performance issue (e.g., a rapidly depleting battery) has occurred. The human engineer or tester may determine that this is the case based on the fact that such an issue occurs specifically in devices of the same type, by reviewing aggregated metric data related to that type. The human engineer or tester may further determine that the cause is an application having particularly high power consumption, based on the aggregated metric data.
It is important to note that the performance issue is determined “using aggregated metric data for the metric attribute that was generated using the metric data from the devices in the subset.” That is, the claim does not affirmatively recite performing the aggregation, merely using data that has been aggregated.
In view of the foregoing, claim 1 recites a mental process, and the additional elements are considered at Step 2A, Prong 2, for whether they integrate the mental process into a practical application. The additional elements include: (1) maintaining metric data for an application that executed on a plurality of devices at least some of which have different contexts, (2) providing aggregated metric data for the metric attribute that is generated using the metric data from the devices, and (3) providing for display the data for the portion of the code base or hardware subcomponent that likely caused the potential performance issue. Elements (1) and (2) recite necessary pre-solution data gathering, such as from memory or over a data connection; and element (3) recites the output of the results of the mental process steps identified above. Whether considered alone or in combination, these elements do not integrate the mental process into a practical application.
At Step 2B, the additional elements are further evaluated for whether they recite significantly more than the abstract idea. The analysis provided at Step 2A, Prong 2, is incorporated, and the extra-solution activity and other elements are further evaluated for whether they recite anything other than what is well-understood, routine and/or conventional in the field. The additional elements recite well-understood, routine and/or conventional computer functions, claimed at a high level of generality, such as storing and retrieving information from memory or over a network (i.e., maintaining and/or retrieving metric attribute and (aggregated) metric data), and presenting data and information on a display (see MPEP § 2106.05(d)). In view of the foregoing, the additional elements do not recite significantly more than the abstract idea, and claim 1 is accordingly found ineligible.
Claims 14 and 20 are ineligible for the same reasons as those provided with respect to claim 1. Examiner notes that claim 14 additionally recites a computer system having a computer configurable to execute instructions in storage, and claim 20 recites storage media including instructions executable on a computer. These additional elements recite additional well-understood, routine and/or conventional computing functions, and/or merely set forth that the mental process may be performed on a general-purpose computer and/or using a general-purpose computer as a tool.
Claims 2-5 and 15-18 further recite determining a software signature for code executed on each device in the subset when the metric data was generated, determining whether the software signatures satisfy a similarity criteria and based thereon determining the potential performance issue, receiving the signature from each device, using the signatures to determine the software or hardware that likely caused the issue, determining a counter identifying a code portion or hardware subcomponent that likely caused the performance issue, and providing that counter for display. These represent additional mental process steps (applying the similarity threshold, determining the performance issue, determining the software or hardware that caused the issue, including using a counter) and/or insignificant and/or well-understood, routine and/or conventional extra-solution activity (determining the signatures, receiving the signatures, determining the counter, providing counter information for display). As above with respect to claim 1, there is no indication that these claims affirmatively recite actually generating the signatures or the counters, but merely “determine” them, such as by retrieving this information.
Claims 6 and 19 further recite determining a performance change in each device in a subset using metric data for the devices, determining a common context change of the devices using device data, and determining the potential performance issue using the performance change and the context change. These are further observations, evaluations, judgments and/or opinions a human tester or engineer may make based on available data.
Claim 10 further recites receiving corresponding metric data from a device when the device determines that a performance issue threshold is satisfied, which describes necessary pre-solution data gathering.
Claim 11 further recites that the metric data comprises a log, which describes the format of the data that is gathered and evaluated.
Claim 12 further recites that the context may be a software or hardware context, which describes the type of data that may be gathered and evaluated.
Claim 13 further recites that determining the portion of the code base or hardware subcomponent that likely caused the performance issue comprises determining the portion of code executed on a device or that provides one or more services to the device that caused the performance issue, which further describes the determination thereof made with respect to claim 1.
None of these additional features integrate the abstract idea into a practical application or recite significantly more than the abstract idea, whether considered alone or in combination. Accordingly, claims 2-6, 10-13 and 15-19 are additionally ineligible.
Examiner notes that claim 7 does recite generating the signatures based on a determination that a potential performance issue has occurred, and further based on a determination of the software that executed on the device at the time that the metric data used to evaluate the possibility of a performance issue was generated. These additional steps are representative of those that integrate the identified mental process steps into a practical application, and Examiner accordingly finds that claim 7, and claims 8-9 which depend therefrom, recite eligible subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 6, 10-12, 14 and 19-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Gupta et al., U.S. 2015/0161386 A1 (“Gupta”) in view of Moretto et al., U.S. 2016/0314064 A1 (“Moretto”).
Regarding claim 1, Gupta teaches: A computer-implemented method (Gupta, e.g., ¶45, “methods, and mobile devices configured to implement the methods …”) comprising:
maintaining, for a plurality of devices at least some of which have different contexts from a plurality of contexts, metric data for an application that executed on each of the plurality of devices (Gupta, e.g., ¶64, “full classifier model may be a robust data model that is generated as a function of a large training dataset …” See also, e.g., ¶66, “trained on a very large number of features in order to support all makes and models of mobile devices …” and ¶68, “full classifier model may be generated by a network server configured to receive a large amount of information regarding mobile device behaviors and states, features, and conditions during or characterizing those behaviors … generate the full classifier model to include all or most of the features, data points, and/or factors that could contribute to the degradation over time of any of a number of different makes, models, and configurations of mobile devices.” See also, e.g., ¶60, “access and use of the biometric sensors in the first mobile device may indicate that a malicious application is authorizing financial transactions without the user’s knowledge or consent. On the other hand, features that test conditions relating to the access and use of these sensors are not likely to be relevant … in a second mobile device which is not configured to use its biometric sensors to authorize financial transactions … first and second devices may be identical in all aspects (i.e., are the same type, model, operating system, software, etc.) except for their configuration for the use of their biometric sensors …” Examiner’s note: the full classifier model (and the complete corpus of collected metric data from all monitored applications across all devices) include metric data maintained for an application (such as the exemplary malicious application) across a plurality of mobile devices, the plurality of mobile devices sharing context (type, model, software, etc.) and having some context differences (sensor configurations, etc.));
determining, for a metric attribute from a plurality of metric attributes and a subset of the plurality of devices each of which have at least one common context from the plurality of contexts, a potential performance issue for the subset of the plurality of devices using aggregated metric data for the metric attribute that was generated using the metric data from the devices in the subset of the plurality of devices (Gupta, e.g., ¶64, “lean classifier model may be a more focused data model that is generated from a reduced dataset that includes or prioritizes tests on the features/entries that are most relevant for determining whether a particular mobile device behavior is benign or not benign …” See also, e.g., ¶65, “application-based classifier model may be an application-specific classifier model or an application-type specific classifier model …” See also, e.g., ¶67, “generating lean classifier models locally in the mobile device accounting for device-specific features and/or device-state-specific features …” and ¶71, “mobile device may use behavior modeling and machine learning techniques to intelligently and dynamically generate the lean classifier models so that they account for device-specific and/or device-state-specific features of the mobile device (e.g., features relevant to the mobile device configuration, functionality, connected/included hardware, etc.), include, test or evaluate a focused and targeted subset of the features that are determined to be important for identifying a cause or source of the mobile device’s degradation over time, and/or prioritize the targeted subset of features based on probability or confidence values identifying their relative importance for successfully classifying a behavior in the specific mobile device in which they are used/evaluated.” Examiner’s note: the full classifier model is reduced to a lean classifier model based on characteristics particular to a specific device, to include at least hardware, software, or other configuration features; and further the dataset is reduced to include or prioritize tests/values that pertain to the characteristics. Therefore, the processing of one or more types of telemetry or behavior data from a particular device (i.e., a metric attribute from a plurality of metric attributes) is performed based on a model including device behavior data aggregated from a subset of devices and occurrences where there is common context (i.e., a subset of all devices having one or more, but less than all, common state and/or configuration characteristics) and included in the lean classifier model);
determining, using at least a portion of the aggregated metric data, a portion of a code base or a hardware subcomponent that likely caused the potential performance issue (Gupta, e.g., ¶61, “Examples of factors that may contribute to performance degradation include poorly designed software applications, malware … fragmented memory …” See also, e.g., ¶¶199-200, “mobile device processor determines that the suspicious behaviors or potential problems can be identified and corrected based on the results of the behavioral analysis … determine that there is a likelihood of a problem by computing a probability of the mobile device encountering potential problems and/or engaging in suspicious behaviors … greater than a predetermined threshold …” Examiner’s note: given that at least some of the lean models used in behavior analysis are application-specific, and that other factors under consideration include hardware such as memory, the process determines that a particular application and/or hardware element, in at least some embodiments, needs to be corrected in order to remediate and/or prevent the occurrence of a particular problem).
Gupta does not more particularly teach providing for presentation on a display data for the portion of the code base that likely caused the potential performance issue. However, Moretto does teach: providing, for presentation on a display, data for the portion of the code base or the hardware subcomponent that likely caused the potential performance issue (Moretto, e.g., FIG. 12 and ¶¶69-76, showing an example of a chart that displays a portion of a code base (i.e., a particular subsection of an application) that contributes to the possibility of an error based on a load applied to the application and the different code base (application) portions) for the purpose of determining the root causes of and potential means for fixing or optimizing the performance issues in a cloud application (Moretto, e.g., ¶¶4-9).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for classifying mobile device application behaviors as taught by Gupta to provide for providing for presentation on a display data for the portion of the code base that likely caused the potential performance issue because the disclosure of Moretto shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for classifying cloud application performance bottlenecks to provide for providing for presentation on a display data for the portion of the code base that likely caused the potential performance issue for the purpose of determining the root causes of and potential means for fixing or optimizing the performance issues in a cloud application (Moretto, Id.).
Claims 14 and 20 are rejected for the reasons given in the rejection of claim 1 above. Examiner notes that with respect to claim 14, Gupta further teaches: A system comprising one or more computers and one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations (Gupta, e.g., FIG. 12 and ¶204, “aspects may be implemented on a variety of computing devices, an example of which is illustrated in FIG. 12 in the form of a smartphone … include a processor 1202 coupled to internal memory 1204 …” See also, e.g., ¶207, “processors … can be configured by software instructions (applications) to perform a variety of functions … software applications may be stored in the internal memory [] before they are loaded into the processor …”) comprising: [[[the method of claim 1]]]; and with respect to claim 20, Gupta further teaches: One or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations (Gupta, e.g., FIG. 12 and ¶204, “aspects may be implemented on a variety of computing devices, an example of which is illustrated in FIG. 12 in the form of a smartphone … include a processor 1202 coupled to internal memory 1204 …” See also, e.g., ¶207, “processors … can be configured by software instructions (applications) to perform a variety of functions … software applications may be stored in the internal memory [] before they are loaded into the processor …”) comprising: [[[the method of claim 1]]].
Regarding claim 6, the rejection of claim 1 is incorporated, and Gupta further teaches: wherein determining the potential performance issue comprises: determining, using the metric data for the devices in the subset of the plurality of devices, a performance change in each device in the subset (Gupta, e.g., ¶85, “device state monitoring engine operating on the mobile computing device may continually monitor the mobile device for changes in the mobile device’s configuration and/or state … may look for configuration and/or state changes that may impact the performance or effectiveness …”);
determining, using data for the devices in the subset of the plurality of devices, a common context change (Gupta, e.g., ¶86, “notify a device state specific feature generator when the device state monitoring engine detects a state change, and the device state specific feature generator may signal the behavior analyzer module to add or remove certain features based on the mobile device’s state change.” Examiner’s note: the full classifier model is reduced to a lean classifier model based on characteristics particular to a specific device, to include at least hardware, software, or other configuration features; and further the dataset is reduced to include or prioritize tests/values that pertain to the characteristics. Therefore, the processing of one or more types of telemetry or behavior data from a particular device (i.e., a metric attribute from a plurality of metric attributes) is performed based on a model including device behavior data aggregated from a subset of devices and occurrences where there is common context (i.e., a subset of all devices having one or more, but less than all, common state and/or configuration characteristics) and included in the lean classifier model. That is, in view of the updating of the features in the lean classifier model, the comparison is now being made among a subset of the devices having a common context change (i.e., to a different state and/or configuration)); and
determining the potential performance issue using the performance change in each device in the subset and the common context change (Gupta, e.g., ¶61, “Examples of factors that may contribute to performance degradation include poorly designed software applications, malware … fragmented memory …” See also, e.g., ¶¶199-200, “mobile device processor determines that the suspicious behaviors or potential problems can be identified and corrected based on the results of the behavioral analysis … determine that there is a likelihood of a problem by computing a probability of the mobile device encountering potential problems and/or engaging in suspicious behaviors … greater than a predetermined threshold …” Examiner’s note: given that at least some of the lean models used in behavior analysis are application-specific, and that other factors under consideration include hardware such as memory, the process determines that a particular application and/or hardware element, in at least some embodiments, needs to be corrected in order to remediate and/or prevent the occurrence of a particular problem).
Claim 19 is rejected for the additional reasons given in the rejection of claim 6 above.
Regarding claim 10, the rejection of claim 1 is incorporated, and Gupta further teaches: receiving, from a first device from the plurality of devices, corresponding metric data when the first device determines that a performance issue threshold is satisfied (Gupta, e.g., ¶155, “decision nodes 448 may be decision stumps … one level decision tree that has exactly one node that tests one condition or mobile device feature … applying a feature vector to a decision stump results in a binary answer (e.g., yes or not, malicious or benign, etc.) … ‘is the frequency of SMS transmissions less than x per min,’ … binary ‘yes’ or ‘no’ answer may then be used to classify the result as indicating that the behavior is either malicious (M) or benign (B) … processing to perform each stump is very simple …”).
Regarding claim 11, the rejection of claim 1 is incorporated, and Gupta further teaches: wherein the metric data comprises a log (Gupta, e.g., ¶99, “behavior observer module 202 may monitor … calls, and other instrumented component by reading information from log files …”).
Regarding claim 12, the rejection of claim 1 is incorporated, and Gupta further teaches: wherein the context comprises at least one of a hardware context or a software context (Gupta, e.g., ¶64, “full classifier model may be a robust data model that is generated as a function of a large training dataset …” See also, e.g., ¶66, “trained on a very large number of features in order to support all makes and models of mobile devices …” and ¶68, “full classifier model may be generated by a network server configured to receive a large amount of information regarding mobile device behaviors and states, features, and conditions during or characterizing those behaviors … generate the full classifier model to include all or most of the features, data points, and/or factors that could contribute to the degradation over time of any of a number of different makes, models, and configurations of mobile devices.” See also, e.g., ¶60, “access and use of the biometric sensors in the first mobile device may indicate that a malicious application is authorizing financial transactions without the user’s knowledge or consent. On the other hand, features that test conditions relating to the access and use of these sensors are not likely to be relevant … in a second mobile device which is not configured to use its biometric sensors to authorize financial transactions … first and second devices may be identical in all aspects (i.e., are the same type, model, operating system, software, etc.) except for their configuration for the use of their biometric sensors …” Examiner’s note: the full classifier model (and the complete corpus of collected metric data from all monitored applications across all devices) include metric data maintained for an application (such as the exemplary malicious application) across a plurality of mobile devices, the plurality of mobile devices sharing context (type, model, software, etc.) and having some context differences (sensor configurations, etc.). These contexts comprise both hardware and software contexts).
Claims 2-5, 7-9, 13 and 15-18 are rejected under 35 U.S.C. § 103 as being unpatentable over Gupta in view of Moretto, and in further view of Horovitz et al., U.S. 2016/0085664 A1 (“Horovitz”).
Regarding claim 2, the rejection of claim 1 is incorporated, but Gupta in view of Moretto does not more particularly teach determining software signatures for code executing on each corresponding device in the subset when the metric data was generated, and using the signature to determine whether signatures for code executed on devices in the subset satisfy a similarity criteria to determine the potential performance issue for the devices in the subset. However, Horovitz does teach: determining, for each device in the subset of plurality of devices, a software signature for code executed on the corresponding device when the metric data was generated (Horovitz, e.g., ¶¶26-27, “generate a testing application fingerprint 141 representing a response of the application, during testing, to the simulation 183 of the fault of the external service … based on the fault response indication 184 received from testing system … ‘fingerprint’ is a collection of information representing a state of a computer application at a given time … fault response indication 184 may include metrics collected for application 105 during fault simulation 183, after fault simulation 183, or both … indicating metrics, among the metrics received from testing system 150, that became abnormal in response to fault simulation …” See also, e.g., ¶29, “store testing application fingerprint 141 in repository 140 … associate, in repository 140, testing application fingerprint 141 with the external service … and the fault … with a description 142 of the fault and the external service … repository 140 may be used … to identify, based on fingerprint 141, the external service and external service fault of description 142 as sources of a problem detected during non-testing execution of application …”);
determining, using the software signatures, that the software signatures for the code executed on the devices in the subset of the plurality of devices satisfy a similarity criteria, wherein determining the potential performance issue for the subset of the plurality of devices is responsive to determining that the software signatures satisfy the similarity criteria (Horovitz, e.g., ¶50, “compare production application fingerprint 245 to each of testing application fingerprints 241-1-241-M … determine that production application fingerprint 245 is most similar to testing application fingerprint 241-M among the plurality of application fingerprints … determine that production application fingerprint 245 satisfies a similarity threshold relative to testing application fingerprint 241-1 …” See also, e.g., ¶39, “application 105 may be a composite application that production system 270 may run on multiple computing resources (e.g., servers) logically divided into multiple tiers … a given tier may include a plurality of the same type of computing resource (e.g., multiple servers) each contributing to the execution of the composite application …”) for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, e.g., ¶¶8-13, 66-68).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for classifying mobile device application behaviors as taught by Gupta in view of Moretto to provide for determining software signatures for code executing on each corresponding device in the subset when the metric data was generated, and using the signature to determine whether signatures for code executed on devices in the subset satisfy a similarity criteria to determine the potential performance issue for the devices in the subset because the disclosure of Horovitz shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for using application fingerprints to identify production application issues and their causes to provide for determining software signatures for code executing on each corresponding device in the subset when the metric data was generated, and using the signature to determine whether signatures for code executed on devices in the subset satisfy a similarity criteria to determine the potential performance issue for the devices in the subset for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, Id.).
Regarding claim 3, the rejection of claim 2 is incorporated, but Gupta in view of Moretto does not more particularly teach receiving the corresponding signatures from the devices in the subset. However, Horovitz does teach: receiving, from each device in the subset of the plurality of devices, the corresponding software signature (Horovitz, e.g., ¶47, “generate a production application fingerprint 245 representing a state of application 105 at (or proximate to) the time of the detected problem … based on production metrics 286 in any manner …” Examiner’s note: a software signature is a representation of a backtrace or other metrics generated proximate an issue, wherein in some instances the signature is generated based on metrics (see, e.g., Spec. at ¶¶42-47). This is consistent with the description in Horovitz) for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, e.g., ¶¶8-13, 66-68).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for classifying mobile device application behaviors as taught by Gupta in view of Moretto to provide for receiving the corresponding signatures from the devices in the subset because the disclosure of Horovitz shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for using application fingerprints to identify production application issues and their causes to provide for receiving the corresponding signatures from the devices in the subset for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, Id.).
Regarding claim 4, the rejection of claim 2 is incorporated, and Horovitz further teaches: wherein determining the portion of the code base or the hardware subcomponent that likely caused the potential performance issues uses the software signatures for the code executed on the devices in the subset of the plurality of devices (Horovitz, e.g., ¶50, “compare production application fingerprint 245 to each of testing application fingerprints … determine that production application fingerprint 245 is most similar to testing application fingerprint … satisfies a similarity threshold … identify the external service (i.e., external service 276-1) and the fault of description 242-1, associated with testing application fingerprint 241-1, as sources of the detected problem”).
Regarding claim 5, the rejection of claim 2 is incorporated, but Gupta in view of Moretto does not more particularly teach determining a counter that identifies the portion of the code base or hardware subcomponent to determine which likely caused the performance issue, and providing data comprising the counter for that portion of code or hardware subcomponent for presentation on the display. However, Horovitz does teach: determining the portion of the code base or the hardware subcomponent that likely caused the potential performance issues comprises determining a counter that identifies the portion of the code base or the hardware subcomponent (Horovitz, e.g., ¶50, “compare production application fingerprint 245 to each of testing application fingerprints … determine that production application fingerprint 245 is most similar to testing application fingerprint … satisfies a similarity threshold … identify the external service (i.e., external service 276-1) and the fault of description 242-1, associated with testing application fingerprint 241-1, as sources of the detected problem.” Examiner’s note: the identifier of external service 276-1 is representative of the counter, as it identifies the portion of the code base (i.e., the external service of a composite application) as the likely cause of the detected problem); and
providing the data for the portion of the code base or the hardware subcomponent that likely caused the potential performance issue comprises providing, for presentation on the display, the counter (Horovitz, e.g., ¶68, “engine 327 may further output a report 390 including an indication of the external service of description 342-y and including an indication 394 of the external service fault of description 342-y. Report may be output in any suitable manner … displayed (e.g., on a screen or other display of the computing device) …”) for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, e.g., ¶¶8-13, 66-68).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for classifying mobile device application behaviors as taught by Gupta in view of Moretto to provide for determining a counter that identifies the portion of the code base or hardware subcomponent to determine which likely caused the performance issue, and providing data comprising the counter for that portion of code or hardware subcomponent for presentation on the display because the disclosure of Horovitz shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for using application fingerprints to identify production application issues and their causes to provide for determining a counter that identifies the portion of the code base or hardware subcomponent to determine which likely caused the performance issue, and providing data comprising the counter for that portion of code or hardware subcomponent for presentation on the display for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, Id.).
Claims 15-18 are rejected for the additional reasons given in the rejections of claims 2-5 above.
Regarding claim 7, the rejection of claim 1 is incorporated, and Gupta further teaches: wherein determining the potential performance issue comprises: determining, for each device in the subset of the plurality of devices, that at least some of the metric data for the corresponding device indicates a candidate performance issue for the corresponding device (Gupta, e.g., ¶198, “perform coarse observations by monitoring/observing a subset of a large number of factors/behaviors that could contribute to the mobile device’s degradation … perform behavioral analysis operations …” See also, e.g., ¶200, “determine that there is a likelihood of a problem … probability is greater than a predetermined threshold …”);
in response to determining that at least some of the metric data for the corresponding device indicates the candidate performance issue for the corresponding device, generating, for each device in the subset of the plurality of devices, the corresponding software signature for code executed on the corresponding device when the metric data was generated (Gupta, e.g., ¶198, “generate a behavior vector characterizing the coarse observations and/or the mobile device behavior based on the coarse observations …” See also, e.g., ¶201, “may perform deeper logging/observations or final logging on the identified subsystems, processes or applications …”).
Gupta in view of Moretto does not more particularly teach using the signatures to determine whether the signatures for the devices in the subset satisfy a similarity criteria and, in response thereto, determining that the candidate performance issues for the devices in the subset are likely the same issue. However, Horovitz does teach: determining, using the software signatures, that the software signatures for the code executed on the devices in the subset of the plurality of devices satisfy a similarity criteria; and in response to determining that the software signatures satisfy the similarity criteria, determining that the candidate performance issues for the devices in the subset of the plurality of devices are likely the same performance issue (Horovitz, e.g., ¶50, “compare production application fingerprint 245 to each of testing application fingerprints 241-1-241-M … determine that production application fingerprint 245 is most similar to testing application fingerprint 241-M among the plurality of application fingerprints … determine that production application fingerprint 245 satisfies a similarity threshold relative to testing application fingerprint 241-1 …” See also, e.g., ¶39, “application 105 may be a composite application that production system 270 may run on multiple computing resources (e.g., servers) logically divided into multiple tiers … a given tier may include a plurality of the same type of computing resource (e.g., multiple servers) each contributing to the execution of the composite application …”) for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, e.g., ¶¶8-13, 66-68).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for classifying mobile device application behaviors as taught by Gupta in view of Moretto to provide for using the signatures to determine whether the signatures for the devices in the subset satisfy a similarity criteria and in response thereto determining that the candidate performance issues for the devices in the subset are likely the same issue because the disclosure of Horovitz shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for using application fingerprints to identify production application issues and their causes to provide for using the signatures to determine whether the signatures for the devices in the subset satisfy a similarity criteria and in response thereto determining that the candidate performance issues for the devices in the subset are likely the same issue for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, Id.).
Regarding claim 8, the rejection of claim 7 is incorporated, and Gupta further teaches: wherein determining that at least some of the metric data for the corresponding device indicates the candidate performance issue for the corresponding device comprises determining that a likelihood that the corresponding device has the candidate performance issue satisfies a likelihood threshold (Gupta, e.g., ¶198, “perform coarse observations by monitoring/observing a subset of a large number of factors/behaviors that could contribute to the mobile device’s degradation … perform behavioral analysis operations …” See also, e.g., ¶200, “determine that there is a likelihood of a problem … probability is greater than a predetermined threshold …”).
Regarding claim 9, the rejection of claim 7 is incorporated, and Gupta further teaches: in response to determining, using first metric data, that at least some of the first metric data for a first device indicates the candidate performance issue for the first device, requesting, from the first device, second metric data that is more detailed than the first metric data, wherein: generating the corresponding software signature for the code executed on the first device when the metric data was generated uses the second metric data that is more detailed than the first metric data (Gupta, e.g., ¶198, “perform coarse observations by monitoring/observing a subset of a large number of factors/behaviors that could contribute to the mobile device’s degradation … perform behavioral analysis operations …” See also, e.g., ¶200, “determine that there is a likelihood of a problem … probability is greater than a predetermined threshold …” See also, e.g., ¶198, “generate a behavior vector characterizing the coarse observations and/or the mobile device behavior based on the coarse observations …” See also, e.g., ¶201, “may perform deeper logging/observations or final logging on the identified subsystems, processes or applications …” Examiner’s note: this is an iterative process whereby the method collects increasingly detailed metrics / logs, generating signature vectors therefrom, and continues until it determines whether a performance issue can be identified).
Regarding claim 13, the rejection of claim 1 is incorporated, but Gupta in view of Moretto does not more particularly teach that determining the code portion or hardware subcomponent that likely caused the error comprises determining the portion of the code base, for code that executed on a device from the plurality of devices or a system that provides one or more services to the device, that likely caused the error. However, Horovitz does teach: wherein determining the portion of the code base or the hardware subcomponent that likely caused the potential performance issue comprises determining the portion of the code base, for code that was executed on a device from the plurality of devices or a system that provides one or more services to the device, that likely caused the potential performance issue (Horovitz, e.g., ¶50, “compare production application fingerprint 245 to each of testing application fingerprints … determine that production application fingerprint 245 is most similar to testing application fingerprint … satisfies a similarity threshold … identify the external service (i.e., external service 276-1) and the fault of description 242-1, associated with testing application fingerprint 241-1, as sources of the detected problem.”) for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, e.g., ¶¶8-13, 66-68).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for classifying mobile device application behaviors as taught by Gupta in view of Moretto to provide that determining the code portion or hardware subcomponent that likely caused the error comprises determining the portion of the code base, for code that executed on a device from the plurality of devices or a system that provides one or more services to the device, that likely caused the error because the disclosure of Horovitz shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for using application fingerprints to identify production application issues and their causes to provide that determining the code portion or hardware subcomponent that likely caused the error comprises determining the portion of the code base, for code that executed on a device from the plurality of devices or a system that provides one or more services to the device, that likely caused the error for the purpose of simulating a variety of faults on a variety of application components in diverse distributed application architectures and using that information in combination with production metric data gathering and application fingerprints to identify and provide solutions for real-time performance issues (Horovitz, Id.).
Conclusion
Examiner has identified particular references contained in the prior art of record within the body of this action for the convenience of Applicant. Although the citations made are representative of the teachings in the art and are applied to the specific limitations within the enumerated claims, the teaching of the cited art as a whole is not limited to the cited passages. Other passages and figures may apply. Applicant, in preparing the response, should consider fully the entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art and/or disclosed by Examiner.
Examiner respectfully requests that, in response to this Office Action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application.
When responding to this Office Action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 C.F.R. 1.111(c).
Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. Applicant is encouraged to submit an Automated Interview Request (AIR), which may be done via https://www.uspto.gov/patent/uspto-automated-interview-request-air-form, or to contact Examiner directly via the methods below.
Any inquiry concerning this communication or earlier communications from Examiner should be directed to Andrew M. Lyons, whose telephone number is (571) 270-3529, and whose fax number is (571) 270-4529. Examiner can normally be reached Monday to Friday from 10:00 AM to 6:00 PM ET. If attempts to reach Examiner by telephone are unsuccessful, Examiner’s supervisor, Wei Mui, can be reached at (571) 272-3708. Information regarding the status of an application may be obtained from the Patent Center system. For more information about the Patent Center system, see https://www.uspto.gov/patents/apply/patent-center. If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (in USA or Canada) or (571) 272-1000.
/Andrew M. Lyons/Examiner, Art Unit 2191