
ovirtbackup's Introduction

oVirtBackup

This is a tool, written in Python, to make online full backups of VMs running in an oVirt environment.

WARNING: This release has yet to be tested with oVirt 4.4. For oVirt 4.3, please take a look at the branches.

Requirements

It is necessary to install the oVirt Python SDK. In addition, when running Python < 2.7, you need to install argparse.

http://www.ovirt.org/Python-sdk

Usage

Take a look at the usage text.

backup.py -h

Configuration

Take a look at the example "config_example.cfg"

Please avoid Cyrillic symbols in the configuration, otherwise you will get an exception; see #59.

Workflow

  • Create a snapshot
  • Clone the snapshot into a new VM
  • Delete the snapshot
  • Delete previous backups (if set)
  • Export the VM to the NFS share
  • Delete the VM
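
For orientation, the same flow can be sketched with the oVirt Python SDK v4. The snippet below is only a minimal, untested illustration of these steps and is not the script itself; the engine URL, credentials, VM, cluster and export domain names are placeholders, and the real script adds waiting, error handling, naming and old-backup rotation around every step.

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details
conn = sdk.Connection(url='https://engine.example.org/ovirt-engine/api',
                      username='admin@internal', password='secret', insecure=True)
vms_service = conn.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Create a snapshot and wait until it is ready
snaps_service = vm_service.snapshots_service()
snap = snaps_service.add(types.Snapshot(description='Snapshot for backup script',
                                        persist_memorystate=False))
snap_service = snaps_service.snapshot_service(snap.id)
while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
    time.sleep(5)

# Clone the snapshot into a new VM and wait until the clone is down
clone = vms_service.add(types.Vm(name='myvm_BACKUP',
                                 snapshots=[types.Snapshot(id=snap.id)],
                                 cluster=types.Cluster(name='Default')))
clone_service = vms_service.vm_service(clone.id)
while clone_service.get().status != types.VmStatus.DOWN:
    time.sleep(5)

# Delete the snapshot
snap_service.remove()

# Export the clone to the export (NFS) domain, then delete the clone
clone_service.export(exclusive=True, discard_snapshots=True,
                     storage_domain=types.StorageDomain(name='my_export_domain'))
# ... wait until the export has finished and the clone is unlocked ...
clone_service.remove()
conn.close()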

Useful tips

crontab

00  20  *   *   *   /home/backup/oVirtBackup.git/backup.py -c /home/backup/oVirtBackup.git/config_webserver.cfg -d >> /var/log/oVirtBackup/webserver.log 

Logrotate: /etc/logrotate.d/oVirtBackup

/var/log/oVirtBackup/* {
	daily
	rotate 14
	compress
	delaycompress
	missingok
	notifempty
}

Security

Restrict the permissions of config.cfg to the user that needs it (chmod 600 config.cfg).

TODO's

  • When the ovirtsdk supports exporting a snapshot directly to a domain, the VM creation step can be removed to save some disk space during the backup


Running tests

Install tox.

$ tox

ovirtbackup's People

Contributors

exula, immert20, kkoukiou, kobihk, lukas-bednar, mrfishfinger, nellyc, scotthesterberg, slavonnet, tareqalayan, tatref, valeriop, vladko312, wefixit-at


ovirtbackup's Issues

Backup for Storage

Hello, first of all, congratulations on the application.
The application exports the backup to an NFS share. In my case I have a dedicated storage for backups; is it possible to export the snapshots to another volume (Fibre Channel)?

Regards,
Jean Eduardo
Brazil

Failing VM backups with version 4.0.4

Hello Folks.

We are running RHEV 4, with the following package versions:

python-ovirt-engine-sdk4-4.0.1-1.el7ev.x86_64
ovirt-engine-4.0.4.4-0.1.el7ev.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
ovirt-engine-sdk-java-3.6.8.0-1.el7ev.noarch
ovirt-engine-restapi-4.0.4.4-0.1.el7ev.noarch

And we are getting the following error:

[root@cpsvrhevmp01 oVirtBackup-master]# ./backup.py -c config.cfg -d
Oct 26 15:27:15: Start backup for: CPSVZABBIXP01
Oct 26 15:27:16: !!! Got a RequestError:
status: 400
reason: Bad Request
detail: preparedstatementcallback; bad sql grammar [select * from getdiskvmelementbydiskvmelementid(?, ?)]; nested exception is org.postgresql.util.psqlexception: the column name logical name was not found in this resultset.
Oct 26 15:27:16: All backups done
Oct 26 15:27:16: Backup failured for:
Oct 26 15:27:16: CPSVZABBIXP01
Oct 26 15:27:16: Some errors occured during the backup, please check the log file
[root@cpsvrhevmp01 oVirtBackup-master]#

In engine.log there are some more messages:

2016-10-26 15:27:15,646 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils](default task-33) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
2016-10-26 15:27:15,680 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand](default task-34) [1f19ef5a] Running command: CreateUserSessionCommand internal: false.
2016-10-26 15:27:16,404 ERROR [org.ovirt.engine.core.bll.storage.disk.GetAllDisksByVmIdQuery](default task-31) [] Query 'GetAllDisksByVmIdQuery' failed: PreparedStatementCallback; bad SQL grammar [select * from getdiskvmelementbydiskvmelementid(?, ?)]; nested exception is org.postgresql.util.PSQLException: The column name logical_name was not found in this ResultSet.
2016-10-26 15:27:16,404 ERROR [org.ovirt.engine.core.bll.storage.disk.GetAllDisksByVmIdQuery](default task-31) [] Exception: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [select * from getdiskvmelementbydiskvmelementid(?, ?)]; nested exception is org.postgresql.util.PSQLException: The column name logical_name was not found in this ResultSet.
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:99) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:645) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:680) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:712) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:762) [spring-jdbc.jar:4.2.4.RELEASE]
at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:154) [dal.jar:]
at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:120) [dal.jar:]
at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) [spring-jdbc.jar:4.2.4.RELEASE]
at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:147) [dal.jar:]
at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:109) [dal.jar:]
at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeRead(SimpleJdbcCallsHandler.java:101) [dal.jar:]
at org.ovirt.engine.core.dao.DiskVmElementDaoImpl.get(DiskVmElementDaoImpl.java:71) [dal.jar:]
at org.ovirt.engine.core.dao.DiskVmElementDaoImpl.get(DiskVmElementDaoImpl.java:66) [dal.jar:]
at org.ovirt.engine.core.dao.DiskVmElementDaoImpl.get(DiskVmElementDaoImpl.java:19) [dal.jar:]
at org.ovirt.engine.core.bll.storage.disk.GetAllDisksByVmIdQuery.getDiskVmElement(GetAllDisksByVmIdQuery.java:51) [bll.jar:]
at org.ovirt.engine.core.bll.storage.disk.GetAllDisksByVmIdQuery.executeQueryCommand(GetAllDisksByVmIdQuery.java:40) [bll.jar:]
at org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:103) [bll.jar:]
at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:558) [bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:529) [bll.jar:]
at sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source) [:1.8.0_111]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_111]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_111]
at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70) [wildfly-weld-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80) [wildfly-weld-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93) [wildfly-weld-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13) [bll.jar:]
at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source) [:1.8.0_111]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_111]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_111]
at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437)
at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl.jar:2.3.3.Final-redhat-1]
at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) [wildfly-weld-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:66) [wildfly-ejb3-7.0.2.GA-redhat-1.jar:7.0.2.GA-redhat-1]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636)
at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356)
at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340)
at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
at org.ovirt.engine.core.common.interfaces.BackendLocal$$$view3.runQuery(Unknown Source) [common.jar:]
at org.ovirt.engine.api.restapi.resource.BackendResource.runQuery(BackendResource.java:82)
at org.ovirt.engine.api.restapi.resource.BackendResource.getBackendCollection(BackendResource.java:152)
at org.ovirt.engine.api.restapi.resource.AbstractBackendCollectionResource.getBackendCollection(AbstractBackendCollectionResource.java:63)
at org.ovirt.engine.api.restapi.resource.BackendVmDisksResource.list(BackendVmDisksResource.java:54)
at org.ovirt.engine.api.v3.V3Server.adaptList(V3Server.java:149)
at org.ovirt.engine.api.v3.servers.V3VmDisksServer.list(V3VmDisksServer.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_111]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_111]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_111]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_111]
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:138) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:107) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:133) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:107) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:133) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:101) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:402) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:209) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:221) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) [resteasy-jaxrs.jar:3.0.18.Final-redhat-1]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec.jar:1.0.0.Final-redhat-1]
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:81)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:266)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:201)
at io.undertow.servlet.spec.RequestDispatcherImpl.forwardImpl(RequestDispatcherImpl.java:202)
at io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:109)
at org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:139)
at org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:68)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:84)
at org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:63)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.core.aaa.filters.RestApiSessionMgmtFilter.doFilter(RestApiSessionMgmtFilter.java:78) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.core.aaa.filters.EnforceAuthFilter.doFilter(EnforceAuthFilter.java:39) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter.doFilter(SsoRestApiNegotiationFilter.java:91) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter.doFilter(SsoRestApiAuthFilter.java:47) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.core.aaa.filters.SessionValidationFilter.doFilter(SessionValidationFilter.java:59) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.core.aaa.filters.RestApiSessionValidationFilter.doFilter(RestApiSessionValidationFilter.java:35) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.api.restapi.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:111)
at org.ovirt.engine.api.restapi.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:102)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.ovirt.engine.api.restapi.security.CORSSupportFilter.doFilter(CORSSupportFilter.java:183)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:285)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:264)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:175)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:802)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_111]
Caused by: org.postgresql.util.PSQLException: The column name logical_name was not found in this ResultSet.
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.findColumn(AbstractJdbc2ResultSet.java:2727)
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getString(AbstractJdbc2ResultSet.java:2567)
at org.jboss.jca.adapters.jdbc.WrappedResultSet.getString(WrappedResultSet.java:1985)
at org.ovirt.engine.core.dao.DiskVmElementDaoImpl$DiskVmElementRowMapper.mapRow(DiskVmElementDaoImpl.java:59) [dal.jar:]
at org.ovirt.engine.core.dao.DiskVmElementDaoImpl$DiskVmElementRowMapper.mapRow(DiskVmElementDaoImpl.java:42) [dal.jar:]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:93) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:60) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:693) [spring-jdbc.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:629) [spring-jdbc.jar:4.2.4.RELEASE]
... 181 more

2016-10-26 15:27:16,420 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource](default task-31) [] Operation Failed: preparedstatementcallback; bad sql grammar [select * from getdiskvmelementbydiskvmelementid(?, ?)]; nested exception is org.postgresql.util.psqlexception: the column name logical name was not found in this resultset.
2016-10-26 15:27:16,517 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils](default task-48) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
2016-10-26 15:27:16,569 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand](default task-47) [4fad70f1] Running command: CreateUserSessionCommand internal: false.
2016-10-26 15:27:16,656 INFO [org.ovirt.engine.core.bll.aaa.LogoutSessionCommand](default task-47) [6aecd414] Running command: LogoutSessionCommand internal: false.
2016-10-26 15:27:16,665 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector](default task-47) [6aecd414] Correlation ID: 6aecd414, Call Stack: null, Custom Event ID: -1, Message: User admin@internal-authz logged out.

The script does not delete the cloned VM

Hello!
Thanks a lot for your script!

I have a problem where the script is ignoring the deletion of the cloned VM.

oVirt version: 4.4.4-7.el8

My config:

[config]
vm_names: ["VM1"]
vm_middle=_BACKUP
snapshot_description=Snapshot for backup script
server=https://ovirt.example.org/ovirt-engine/api
username=svc@internal
password=password
export_domain=vmdisks_backup_01
timeout=10
cluster_name=CLUSTER
datacenter_name=DC01
backup_keep_count=
backup_keep_count_by_number=3
dry_run=False
vm_name_max_length=64
use_short_suffix=False
storage_domain=vmdisks_storage_01
storage_space_threshold=0.1
logger_fmt=%(asctime)s: %(message)s
logger_file_path=
persist_memorystate=False

Command:

# ./backup.py -c config.cfg -d

Logs:

2021-02-07 01:23:38,135: Start backup for: VM1
2021-02-07 01:23:38,184: The storage domain vmdisks_backup_01 is in state active
2021-02-07 01:23:38,255: Search backup snapshots matching Description="Snapshot for backup script"
2021-02-07 01:23:38,358: Snapshot creation started ...
2021-02-07 01:23:39,006: Snapshot operation(creation) in progress ...
2021-02-07 01:23:39,007: Snapshot id=a5d2af3b-0f7e-4729-b678-e0c7b7fb4cf2 status=locked
2021-02-07 01:23:59,647: Snapshot created
2021-02-07 01:24:09,909: Clone into VM (VM1_BACKUP_20210207_012338) started ...
2021-02-07 01:29:25,567: Cloning into VM (VM1_BACKUP_20210207_012338) in progress ...                                                                                                 [309/1056]
2021-02-07 01:33:49,119: Cloning finished
2021-02-07 01:33:49,119: Search backup snapshots matching Description="Snapshot for backup script"
2021-02-07 01:33:49,523: Found backup snapshot to delete. Description: Snapshot for backup script, Created on: 2021-02-07 01:23:38.590000+03:00
2021-02-07 01:33:50,014: Snapshot deletion started ...
2021-02-07 01:33:50,841: Snapshot operation(deletion) in progress ...
2021-02-07 01:33:50,841: Snapshot id=a5d2af3b-0f7e-4729-b678-e0c7b7fb4cf2 status=locked
2021-02-07 01:34:21,637: Snapshots deleted
2021-02-07 01:34:21,638: Looking for old backup to delete matching ^VM1_BACKUP*, keeping max 3 images...
2021-02-07 01:34:23,085: Found 0 old backup images in export_domain.
2021-02-07 01:34:23,121: Export of VM (VM1_BACKUP_20210207_012338) started ...
2021-02-07 01:36:57,040: Exporting finished
2021-02-07 01:36:57,089: Duration: 32:37 minutes
2021-02-07 01:36:57,089: VM exported as VM1_BACKUP_20210207_012338
2021-02-07 01:36:57,090: Backup done for: VM1
2021-02-07 01:36:57,091: All backups done  

And after the script finishes, I see a virtual machine with the _BACKUP <...> suffix in the VM list, as well as its disks in the original storage domain.

Please help

Store exceptions and throw exit code != 0

If an exception occurs, continue with the backup, but after the job is done return an exit code != 0 so it can be caught by crontab or wrapper scripts.

State: Finished, currently under testing
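
A minimal sketch of this idea (the per-VM function and variable names are hypothetical, not the actual implementation): collect the failures while continuing, and derive the exit status at the end.

import sys

failed_vms = []
for vm_name in vm_names:            # vm_names would come from the config
    try:
        backup_vm(vm_name)          # hypothetical per-VM backup function
    except Exception as ex:
        print("Backup failed for %s: %s" % (vm_name, ex))
        failed_vms.append(vm_name)  # remember the failure, but keep going

# Non-zero exit status so crontab or wrapper scripts can detect partial failures
sys.exit(1 if failed_vms else 0)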

can't delete temporary snapshot called "Snapshot for backup script"

Hello,
If a new snapshot and clone are created on an already filled-up storage, then it is not possible to delete that snapshot due to "not enough space to delete snapshot", and the "snapshot delete" requests stay permanently in a cycle.
Therefore, would you change the sequence of backup steps to snapshot -> clone -> move clone -> delete orig. clone -> delete snapshot?
OR
check the status of the "delete snapshot" step and move it to the end of the backup steps if such an error occurs

regs.
Pavel

unsupported operand type(s) for +=: 'int' and 'NoneType'

I got the error "unsupported operand type(s) for +=: 'int' and 'NoneType'" and temporarily fixed it in vmtools.py:

<             vm_size += disk.size

---
>             if disk.size is not None:
>                 vm_size += disk.size

Unfortunately I could not dig deeper into the error and do not know when it happens.

--tag="" Issue

if opts.vm_tag:
    vms = api.vms.list(max=400, query="tag=" + opts.vm_tag)
    config.set_vm_names([vm.name for vm in vms])

On my box, the max=400 is causing issues with the api.vms.list call. If I remove it, no issues. I'm not sure the reason for max= but on my box it seems to cause issues if it's there.

Bug: The script refuses to execute because of a machine not included in the config file

oVirt version: oVirt 3.6.7.5-1.el7.centos
OS Version: RHEL - 7 - 2.1511.el7.centos.2.10
Kernel Version: 3.10.0 - 327.28.2.el7.x86_64
KVM Version: 2.3.0 - 31.el7_2.10.1
LIBVIRT Version: libvirt-1.2.17-13.el7_2.5
VDSM Version: vdsm-4.17.32-1.el7
SPICE Version: 0.12.4 - 15.el7_2.1
GlusterFS Version: glusterfs-3.8.1-1.el7
CEPH Version: librbd1-0.80.7-3.el7

Python packages:

python-pip-7.1.0-1.el7.noarch
python-kitchen-1.1.1-5.el7.noarch
ovirt-engine-sdk-python-3.6.7.0-1.el7.centos.noarch
libxml2-python-2.9.1-6.el7_2.3.x86_64
newt-python-0.52.15-4.el7.x86_64
policycoreutils-python-2.2.5-20.el7.x86_64
python-urwid-1.1.1-3.el7.x86_64
python-setproctitle-1.1.6-5.el7.x86_64
python-pyudev-0.15-7.el7_2.1.noarch
python-perf-3.10.0-327.28.2.el7.x86_64
python-devel-2.7.5-38.el7_2.x86_64
python-javapackages-3.4.1-11.el7.noarch
python-slip-0.4.0-2.el7.noarch
python-markdown-2.4.1-1.el7.centos.noarch
python-urlgrabber-3.10-7.el7.noarch
rpm-python-4.11.3-17.el7.x86_64
python-ethtool-0.8-5.el7.x86_64
python-rtslib-2.1.fb57-3.el7.noarch
python-psycopg2-2.5.1-3.el7.x86_64
dbus-python-1.1.1-9.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
python2-ecdsa-0.13-4.el7.noarch
python-IPy-0.75-6.el7.noarch
python-pillow-2.0.0-19.gitd1c6db8.el7.x86_64
python-pycurl-7.19.0-17.el7.x86_64
python-ply-3.4-10.el7.noarch
python2-paramiko-1.16.1-1.el7.noarch
python-setuptools-0.9.8-4.el7.noarch
python-nose-1.3.0-3.el7.noarch
python-cheetah-2.4.4-5.el7.centos.x86_64
python-six-1.9.0-2.el7.noarch
python-kmod-0.9-4.el7.x86_64
python-configshell-1.1.fb18-1.el7.noarch
python-chardet-2.2.1-1.el7_1.noarch
python-libs-2.7.5-38.el7_2.x86_64
audit-libs-python-2.4.1-5.el7.x86_64
python-decorator-3.4.0-3.el7.noarch
python-iniparse-0.4-9.el7.noarch
libsemanage-python-2.1.10-18.el7.x86_64
python2-crypto-2.6.1-9.el7.x86_64
python-daemon-1.6-4.el7.noarch
python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch
python-pygments-1.4-9.el7.noarch
lvm2-python-libs-2.02.130-5.el7_2.5.x86_64
python-2.7.5-38.el7_2.x86_64
python-slip-dbus-0.4.0-2.el7.noarch
python-lockfile-0.9.1-4.el7.centos.noarch
python-backports-1.0-8.el7.x86_64
python-dateutil-1.5-7.el7.noarch
systemd-python-219-19.el7_2.12.x86_64
python-lxml-3.2.1-4.el7.x86_64
python-configobj-4.7.2-7.el7.noarch
cracklib-python-2.9.0-11.el7.x86_64
python-websockify-0.6.0-2.el7.noarch

Description: Only the Ansible_Prod VM exists in the config file, but the backup log says:

Aug 23 07:57:52: Start backup for: Ansible_Prod
Aug 23 07:57:53: !!! Can't delete cloned VM (PerseoVoto_Prod)
Aug 23 07:57:53: !!! Got unexpected exception: 'ascii' codec can't encode character u'\xf1' in position 17: ordinal not in range(128)

The backup doesn't execute.

Thanks and regards

There are many calls of vms.list in the script

It would be nice to aim every query at the specific VM(s) instead of pulling the whole set of VMs every time.
In addition, the max=400 keyword is used, which potentially truncates results we could be interested in.

I suggest at least these two improvements:

  1. When you look for cloned vms, you can use vms.list(query="name=original_vm*")

  2. When you search for a specific VM, just use vms.get("vm_name"), for example when you wait until it is removed, or to verify whether the VM exists (see the sketch below).
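
A rough sketch of both suggestions with the v3 ovirtsdk API (illustrative only, assuming an existing api connection object):

# Targeted queries instead of listing all VMs
cloned_vms = api.vms.list(query="name=original_vm*")   # only the clones of one VM
vm = api.vms.get(name="vm_name")                       # a single VM by name
if vm is None:                                         # v3 SDK returns None if no match
    print("VM does not exist (anymore)")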

Backup stops when a VM is removed in the meantime

If you run the backup script for all VMs, the backup fails if a VM is removed while the backup is running.

For me it's OK if that VM is not backed up, but currently the script execution ends with a 'NoneType' exception.

Case:
I have 22 VMs and started a backup for all of them (-a), which currently takes approx. 22 hours on my system. In the meantime someone else removed a test VM.

Log:

Jan 05 19:03:45: Start backup for: Gluu2-test
Jan 05 19:03:45: !!! Got unexpected exception: 'NoneType' object has no attribute 'snapshots'

Cyrillic symbol in name of snapshot

Warning!
If the name of a snapshot contains Cyrillic symbols, the script raises an exception.
The other VMs in the list are not cloned either.
An exception occurs when the script executes line 305:

            # Delete old backup snapshots
            VMTools.delete_snapshots(vm, config, vm_from_list)

Log:
2019-11-22 08:52:24,276: Start backup for: BARS
2019-11-22 08:52:24,863: !!! Got unexpected exception: 'ascii' codec can't encode characters in position 0-5: ordinal not in range(128)

Solution: delete the snapshot with Cyrillic symbols manually, or rename it.

From Russian:
If a virtual machine has a snapshot that contains Cyrillic characters, the script will raise an exception. That virtual machine will not be cloned.
All other VMs specified in the list will not be cloned either.

Solution: delete such a snapshot or rename it.

no logs generated

Hi,
the next issue on the same oVirt version is about logs.
We have no logs generated anywhere :-(
running on
oVirt Engine Version: 3.5.2.1-1.el7.centos

any help ??
regs.
Pavel

TypeError: The 'search' parameter should be of type 'str', but it is of type 'unicode'.

I am using the 4.3 version of the script to back up a 4.3 oVirt environment, but received the following error.

Traceback (most recent call last):
File "/opt/ovirtbackup/backup.py", line 464, in
main(sys.argv[1:])
File "/opt/ovirtbackup/backup.py", line 223, in main
if system_service.data_centers_service().list(search='name=%s' % config.get_datacenter_name() )[0] is None:
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 6507, in list
('search', search, str),
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 191, in _check_types
raise TypeError(' '.join(messages))
TypeError: The 'search' parameter should be of type 'str', but it is of type 'unicode'.

Failed backup VM

Backup of a VM using the API failed.

Error message:

2016-10-25 15:32:34,185 ERROR [org.ovirt.engine.api.restapi.util.LinkHelper](default task-46) [] Can't find relative path for class "org.ovirt.engine.api.resource.VmDisksResource", will return null

multiple VMs don't work

I've got an issue with backing up more than one VM in the config file.

$./backup.py -c config-TEST.cfg -d
Jan 21 17:34:17: Start backup for: test2
Jan 21 17:34:39: Snapshot creation started ...
Jan 21 17:34:40: Snapshot operation(creation) in progress ...
Jan 21 17:34:46: Snapshot operation(creation) in progress ...
Jan 21 17:34:51: Snapshot operation(creation) in progress ...
Jan 21 17:34:57: Snapshot created
Jan 21 17:34:58: Clone into VM started ...
Jan 21 17:35:00: Cloning in progress ...
Jan 21 17:35:05: Cloning in progress ...
Jan 21 17:35:22: Cloning in progress ...
Jan 21 17:35:27: Cloning finished
Jan 21 17:35:28: Found snapshots(1):
Jan 21 17:35:28: Snapshots description: Snapshot for backup script, Created on: 2016-01-21 17:34:39.696000+01:00
Jan 21 17:35:28: Snapshot deletion started ...
Jan 21 17:35:29: Snapshot operation(deletion) in progress ...
Jan 21 17:37:22: Snapshot operation(deletion) in progress ...
Jan 21 17:37:27: Snapshots deleted
Jan 21 17:37:29: Export started ...
Jan 21 17:37:31: Exporting in progress ...
Jan 21 17:37:59: Exporting in progress ...
Jan 21 17:38:04: Exporting finished
Jan 21 17:38:26: Delete cloned VM started ...
Jan 21 17:38:48: Cloned VM deleted
Jan 21 17:38:48: Duration: 4:32 minutes
Jan 21 17:38:48: VM exported as test2_BACKUP_1453394056
Jan 21 17:38:48: Backup done for: test2
Jan 21 17:38:48: All backups done
Jan 21 17:38:48: Backup failured for:
Jan 21 17:38:48: test1

$ cat config-TEST.cfg
[config]
vm_names: ["test2","test1"]
vm_middle=_BACKUP
snapshot_description=Snapshot for backup script
server=https://ovirt
username=admin@internal
password=
export_domain=BACKUP-export
timeout=5
cluster_name=KVM-prod
backup_keep_count=21
dry_run=False

running on
GUI - oVirt Engine Version: 3.5.2.1-1.el7.centos
ovirt-engine-sdk-python-3.5.5.0-1.el7.centos.noarch
python-2.7.5-18.el7_1.1.x86_64

regs. Pavel

Disk selection for snapshot

It would be nice to select which disks of the snapshot are exported (if there are multiple disks in the VM).
I have a couple of VMs with a large second data disk which I am backing up with another solution, and I just want to back up the OS disk for easier recovery.
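
With the v4 API it should be possible to limit a snapshot to selected disks by passing disk_attachments when creating it; a rough, untested sketch (connection details and the disk id are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(url='https://engine.example.org/ovirt-engine/api',
                      username='admin@internal', password='secret', insecure=True)
vms_service = conn.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
snaps_service = vms_service.vm_service(vm.id).snapshots_service()

# Untested: snapshot only the OS disk, leaving the large data disk out
snaps_service.add(types.Snapshot(
    description='Snapshot for backup script',
    persist_memorystate=False,
    disk_attachments=[types.DiskAttachment(disk=types.Disk(id='os-disk-uuid'))],
))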

Backup fails completely if VM missing

I recently deleted a VM but neglected to remove it from the list in config.cfg. This resulted in oVirtBackup bombing out with an error message in the log and no VM backups taken. This is highly undesirable behaviour and I believe a more appropriate response would be to log the error but continue to backup the remainder of the listed VMs. Please consider making the change.

Error message "400 / Bad Request / Internal Engine Error" when using backup.py on some VMs

When using backup.py on some(!) VMs, we get a "Bad Request / Internal Engine Error" during the cloning step:

$ ./backup.py -c ./config_testvm.cfg -d
Apr 12 10:50:05: Start backup for: testvm
Apr 12 10:50:24: Snapshot creation started ...
Apr 12 10:50:32: Snapshot operation(creation) in progress ...
Apr 12 10:50:41: Snapshot operation(creation) in progress ...
Apr 12 10:50:51: Snapshot operation(creation) in progress ...
Apr 12 10:51:00: Snapshot created
Apr 12 10:51:04: Clone into VM started ...
Apr 12 10:51:07: !!! Got a RequestError:
status: 400
reason: Bad Request
detail: Internal Engine Error
Apr 12 10:51:07: All backups done
Apr 12 10:51:07: Backup failured for:
Apr 12 10:51:07:   testvm
Apr 12 10:51:07: Some errors occured during the backup, please check the 
log file
$

The used config file looks as follows:

$ cat config_testvm.cfg
[config]
vm_names: ["testvm"]
vm_middle=_BACKUP
snapshot_description=Snapshot for backup script
server=https://our.ovirt.server/ovirt-engine/api
username=xxxx@xxxx
password=xxxx
export_domain=our_export_domain
timeout=5
cluster_name=Default
backup_keep_count=3
dry_run=False
vm_name_max_length=32
storage_domain=our_storage_domain
storage_space_threshold=0.1
$

The same script, with only the VM name changed in the configuration file, works fine on other VMs. When running the process for the failing VM manually through oVirt's admin web GUI, no error appears.

There is no obvious configuration difference between working and failing VMs. The backup script is run directly on the oVirt server (our.ovirt.server).

System and oVirt version information:

$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

$ rpm -qa | grep ovirt
ovirt-engine-restapi-3.5.6.2-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.2.8-1.el7.centos.noarch
ovirt-release35-006-1.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.5.6.2-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-3.5.6.2-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.noarch
ovirt-image-uploader-3.5.1-1.el7.centos.noarch
ovirt-engine-backend-3.5.6.2-1.el7.centos.noarch
ovirt-engine-userportal-3.5.6.2-1.el7.centos.noarch
ovirt-engine-setup-base-3.5.6.2-1.el7.centos.noarch
ovirt-engine-jboss-as-7.1.1-1.el7.x86_64
ovirt-engine-setup-3.5.6.2-1.el7.centos.noarch
ovirt-engine-dbscripts-3.5.6.2-1.el7.centos.noarch
ovirt-engine-cli-3.5.0.6-1.el7.centos.noarch
ovirt-engine-3.5.6.2-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.6.2-1.el7.centos.noarch
ovirt-host-deploy-1.3.2-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.6.2-1.el7.centos.noarch
ovirt-engine-tools-3.5.6.2-1.el7.centos.noarch
ovirt-iso-uploader-3.5.2-1.el7.centos.noarch
ovirt-host-deploy-java-1.3.2-1.el7.centos.noarch
ovirt-engine-webadmin-portal-3.5.6.2-1.el7.centos.noarch
libgovirt-0.3.3-1.el7.x86_64
ovirt-hosted-engine-setup-1.2.6.1-1.el7.centos.noarch
ovirt-engine-lib-3.5.6.2-1.el7.centos.noarch
ovirt-engine-websocket-proxy-3.5.6.2-1.el7.centos.noarch
$

Is there a way to further debug this kind of problem? Like a -vvv option? Looking at the ovirt-engine log on the server doesn't really help, as the relevant messages are mixed with hundreds of other messages being printed during normal operation of the oVirt engine. In the concrete case it would e.g. be helpful to track down the concrete request (with data), which is failing. We are currently using commit 394a8743fc59a9bb7b4a131b133aa992fb3a1f30 of oVirtBackup

$ git rev-parse HEAD
394a8743fc59a9bb7b4a131b133aa992fb3a1f30
$

Thanks in advance
Frank Thommen

Feature Request: Support backup of VM's with Delete Protection

It would be awesome if it was possible to back up VMs which have 'Delete Protection' enabled.

Feb 18 15:37:37: Start backup for: t-c7-comp
Feb 18 15:37:42: Snapshot creation started ...
Feb 18 15:38:43: Snapshot created
Feb 18 15:38:44: Clone into VM started ...
Feb 18 15:40:47: Cloning finished
Feb 18 15:40:48: Snapshot deletion started ...
Feb 18 15:41:48: Snapshots deleted
Feb 18 15:41:50: Export started ...
Feb 18 15:42:51: Exporting finished
Feb 18 15:42:56: Delete cloned VM started ...
Feb 18 15:42:57: !!! Can't delete cloned VM (t-c7-comp_BKUP_20160218_153737)
Feb 18 15:42:57: !!! Got a RequestError:
status: 409
reason: Conflict
detail: Cannot remove VM. Delete protection is enabled. In order to delete, disable Delete protection first.
Feb 18 15:42:57: All backups done
Feb 18 15:42:57: Backup failured for:
Feb 18 15:42:57: t-c7-comp
Feb 18 15:42:57: Some errors occured during the backup, please check the log file

I would hazard a guess that if delete protection was removed after the VM was cloned from the snapshot, this would work correctly. I can't find much mention of how to do this, although it appears possible; search for "added deletion protection to the template/vm via .delete_protected" here: http://www.ovirt.org/develop/sdk/python-sdk-changelog/
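
A possible workaround, sketched with the v4 Python SDK (untested; connection details and the search pattern are placeholders): clear the flag on the clone before removing it. With the v3 SDK the .delete_protected attribute mentioned in the changelog above should allow the same.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(url='https://engine.example.org/ovirt-engine/api',
                      username='admin@internal', password='secret', insecure=True)
vms_service = conn.system_service().vms_service()
clone = vms_service.list(search='name=t-c7-comp_BKUP*')[0]
clone_service = vms_service.vm_service(clone.id)

# Drop the delete protection inherited from the original VM, then remove the clone
clone_service.update(types.Vm(delete_protected=False))
clone_service.remove()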

Feature wish: Define config parameters on the commandline or complete config via stdin

It would be nice, if

  1. configuration parameters could be defined on the commandline instead of in the configuration file and if commandline parameters would overwrite configfile settings
  2. the configuration file could be given via stdin

this would come in very handy for automation and dynamic configuration settings. E.g. in setups where many VMs have to be backed up at regular intervals - but never all together - it is not practical to create a config file for each single VM. 90% of the configuration would be redundant and would have to be changed in all configuration files if e.g. the name of the storage domain changes.

@ 1.:
this (by itself incomplete) configuration file:

[config]
# vm_names: define them on the commandline
vm_middle=_BACKUP
snapshot_description=Snapshot for backup script
server=https://my.ovirt.server/ovirt-engine/api
username=my_username
password=my_admin_password
export_domain=my_storage_domain
timeout=5
cluster_name=my_clustername
backup_keep_count=3
dry_run=False
vm_name_max_length=32
storage_domain=my_storage_domain
storage_space_threshold=0.1
use_short_suffix=false

could be run in dry mode with:
backup.py -d -c ./global-backupsettings.conf --vm_names=vm1,vm2,vm3 --dry_run=true

(Writing this, I notice that the vm_names directive has a different syntax from all other settings. Wouldn't it be nice to have the same syntax everywhere? I.e. vm_names = vm1, vm2, vm3 instead of vm_names: ["vm1", "vm2", "vm3"]?)

@ 2.
As alternative (or additionally), the configuration file could be given via stdin. E.g. using above example:

(echo 'vm_names: ["vm1", "vm2", "vm3"]'; \
 cat  ./global-backupsettings.conf;      \
 echo "dry_run=True") | ./backup.py -d

Not as nice as the commandline options but still usable for automation and dynamic configurations :-)

Cheers
Frank

vm_middle= deletes original VM

Let me lead with: GREAT script, and thanks for sharing! I have found a way that an idiot like myself can use the script to delete all my original VMs. If you leave vm_middle blank, it will try to delete the originals. I was thinking that leaving it blank would make the names shorter... thus fewer errors in the future? My mistake...

To help make it a little more dummy-proof, maybe you can throw an if statement in there to look for a blank variable and quit safely? Thanks again for all the hard work you have done on this.
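
A sketch of such a guard (config.get_vm_middle() is the script's existing accessor; the check itself is only a suggestion):

import sys

# Refuse to run with a blank vm_middle: according to the report above,
# an empty vm_middle makes the script delete the original VMs.
if not config.get_vm_middle().strip():
    print("vm_middle must not be empty, aborting")
    sys.exit(1)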

Old backup images not being cleaned up

I'm having a problem with oVirtBackup not cleaning up old backups as specified by the backup_keep_count option in the config file. The old backups are not removed, nor do I see a "Backup deletion started for backup:" message in the log.

With commit 1e3b29e (March24) this functionality works as described - but with the latest release (a4a74d5) it does not.
Based on some manual testing with old commits, I believe this broken functionality was introduced with commit 6563e92 (March30).

I fixed this for me by changing lines 144,145 of vmtools.py (method delete_old_backups) to be like in this diff:
master...MrFishFinger:master

I believe the problem stems from the fact that api.storagedomains.myStorageDomain.vms.list(query=vm_search_regexp) does not seem to work correctly and returns an empty list.
However api.vms.list(query=vm_search_regexp) does seem to work (as seen in the method delete_vm).

I am using an older version of oVirt (3.6) - so maybe that is causing the problem? I am also not an expert with Python by any means...

ovirt-engine-sdk-python.noarch - 3.6.9.1-1.el7
ovirt-release36.noarch - 1:3.6.7-1

let me know if I should submit this patch as a PR or if this is just an issue localised to me :)

Command Line Switch to Backup All VMs Not Working With oVirt 4.0.5

When I use the "-a" switch to backup all VMs, as follows...
python /home/backup/oVirtBackup/backup.py -c config.cfg -a -d
...I get this error:
Nov 18 14:35:50: Unexpected error getting list of vms: [Errno 18] Invalid cross-device link
The backup continues to run for all the VMs in config.cfg, but does not backup all VMs, as expected with this switch.

Snapshot not deleted at error

Hi,
If an error occurs for some reason and a snapshot has already been created for a VM, it is not deleted before going on to the next VM.
br,
Peter C

!!! No snapshot found !!!

2019-08-13 01:47:03,583: Start backup for: 36650
2019-08-13 01:47:03,829: Snapshot creation started ...
2019-08-13 01:47:03,829: Snapshot created
2019-08-13 01:47:14,042: !!! No snapshot found !!!
2019-08-13 01:47:14,042: All backups done
2019-08-13 01:47:14,042: Backup failured for:
2019-08-13 01:47:14,042: 36650
2019-08-13 01:47:14,042: Some errors occured during the backup, please check the log file

Error in VM nameresolution on delete cloned-vm

There is an error in resolving the name of the cloned VM to be deleted.

If you name one VM "Test" and a second one "Test2", the name resolution might fail if you try to back up "Test".
This is because of the wildcard pattern on line 84 in vmtools.py:
vm_search_regexp = ("name=%s%s*" % (vm_name, config.get_vm_middle()))
which results in the following log:

2020-01-26 00:06:15,490: Start backup for: Test
2020-01-26 00:06:15,541: Delete cloned VM (Test2) started ...
2020-01-26 00:06:16,956: !!! Can't delete cloned VM (Test2)

I've edited the line to:
vm_search_regexp = ("name=%s%s__*" % (vm_name, config.get_vm_middle()))
and now running a test.

Logging add timestamp

Add a time-stamp to stdout.

Concept: Create a Logger class and move the "print" lines from the other classes
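
A minimal sketch of that concept with the standard logging module (the format string matches the logger_fmt example from the configuration):

import logging

logging.basicConfig(format="%(asctime)s: %(message)s", level=logging.INFO)
logger = logging.getLogger("oVirtBackup")

logger.info("Start backup for: %s", "myvm")
# prints e.g.: 2021-02-07 01:23:38,135: Start backup for: myvm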

too long filename

Hello,
the next issue with the backup script:

./backup.py -c config-TUTYSERVIS.cfg
Jan 21 20:28:30: Start backup for: TUTYSERVIS
Jan 21 20:28:53: Snapshot creation started ...
Jan 21 20:29:19: Snapshot created
Jan 21 20:29:21: Clone into VM started ...
Jan 21 20:29:22: !!! Got a RequestError:
status: 400
reason: Bad Request
detail: Cannot add VM. The given name is too long.
Jan 21 20:29:22: All backups done
Jan 21 20:29:22: Backup failured for:
Jan 21 20:29:22: TUTYSERVIS
Jan 21 20:29:22: Some errors occured during the backup, please check the log file

We have two possibilities to solve this:

  1. skip the middle part of the generated new name (not recommended - it would only solve a few cases; we have very long names, up to 16 (sixteen) characters)
  2. make the time identification in the suffix part of the new name shorter (or skip it)

running on
GUI - oVirt Engine Version: 3.5.2.1-1.el7.centos
ovirt-engine-sdk-python-3.5.5.0-1.el7.centos.noarch
python-2.7.5-18.el7_1.1.x86_64

regs. Pavel

Make it possible to clone disks into another storage domain?

Hello,
I see that I can only select the cluster as the destination of the clone of the snapshot.
In my case I have only one cluster composed of two hosts, but I have a dedicated storage domain where I would like to put the cloned disks; is it possible?
At the moment I do not have enough space on the source storage domain to contain the cloned disks...
Thanks,
Gianluca

Found another exception for VM ... Entity not found: null

Hello, wefixit-AT!

Just installed the latest version of the script from git. After that, the backup is broken.
oVirt Engine Version: 4.1.1.6-1.el7.centos

My config:

egrep -v "^[[:space:]]*#|^$" /usr/local/sbin/ovirt-vm-backup/config_test.cfg

[config]
vm_names: ["KOM-AD01-APP31"]
vm_middle=_BACKUP
snapshot_description=Snapshot for backup script
server=https://kom-ad01-ovirt1.holding.com/ovirt-engine/api
username=admin@internal
password=Passw0rd
export_domain=NFS-VM-BACKUP
timeout=5
cluster_name=Default
backup_keep_count=20
dry_run=False
vm_name_max_length=64
use_short_suffix=False
storage_domain=3PAR-VOLUME2
storage_space_threshold=0.1
persist_memorystate=False

Start backup:

/usr/local/sbin/ovirt-vm-backup/backup.py -c /usr/local/sbin/ovirt-vm-backup/config_test.cfg -d

2017-04-12 14:44:08,634: Start backup for: KOM-AD01-APP31
2017-04-12 14:44:08,913: Snapshot creation started ...
2017-04-12 14:44:11,454: Snapshot operation(creation) in progress ...
...<output truncated>...
2017-04-12 14:44:27,089: Snapshot operation(creation) in progress ...
2017-04-12 14:44:32,294: Snapshot created
2017-04-12 14:44:42,490: Clone into VM (KOM-AD01-APP31_BACKUP_20170412_144408) started ...
2017-04-12 14:44:45,907: Cloning in progress (VM KOM-AD01-APP31_BACKUP_20170412_144408 status is 'image_locked') ...
...<output truncated>...
2017-04-12 14:47:13,767: Cloning in progress (VM KOM-AD01-APP31_BACKUP_20170412_144408 status is 'image_locked') ...
2017-04-12 14:47:18,883: Cloning finished
2017-04-12 14:47:19,053: Found snapshots(1):
2017-04-12 14:47:19,054: Snapshots description: Snapshot for backup script, Created on: 2017-04-12 14:44:11.211000+03:00
2017-04-12 14:47:19,358: Snapshot deletion started ...
2017-04-12 14:47:19,521: Snapshot operation(deletion) in progress ...
...<output truncated>...
2017-04-12 14:48:11,533: Snapshot operation(deletion) in progress ...
2017-04-12 14:48:16,572:   !!! Found another exception for VM: KOM-AD01-APP31
2017-04-12 14:48:16,573:   DEBUG:
status: 404
reason: Not Found
detail: Entity not found: null

cat /var/log/ovirt-engine/engine.log

2017-04-12 14:48:15,477+03 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler2) [48da32fe] Command 'RemoveSnapshot' id: '8666cb12-28e4-4970-a467-bf074c1da2ed' child commands '[1b1482fd-67c7-4db9-a583-c5b72ae5585b]' executions were completed, status 'SUCCEEDED'
2017-04-12 14:48:16,553+03 INFO  [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (DefaultQuartzScheduler7) [48da32fe] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' successfully.
2017-04-12 14:48:16,570+03 WARN  [org.ovirt.engine.core.bll.GetVmConfigurationBySnapshotQuery] (default task-4) [6470f9ca-54be-479f-ae16-26ac1bce2e5d] Snapshot 'cb18c470-4847-4f75-aecb-f0e442e8c825' does not exist
2017-04-12 14:48:16,571+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-4) [] Operation Failed: Entity not found: null
2017-04-12 14:48:16,600+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler7) [48da32fe] EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_SUCCESS(356), Correlation ID: 37697493-b01c-4fda-a11b-5e9d44adc8fa, Job ID: aedb64ca-1cfa-4251-a211-a43f335385d2, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Snapshot for backup script' deletion for VM 'KOM-AD01-APP31' has been completed.

Please tell me how I can fix this problem.

Special naming conventions for Windows VM's

Because of the VM name limitation found in #6, a Windows VM can have only 15 characters, which can lead to problems with the current backup solution. The current solution appends two strings to the name of the cloned VM, which is exported afterwards. So the actual VM name has to be very short so it can be backed up.

On one of my hosts I wrote a shorter prefix so the VM can be backed up. I will upload this later.

oVirtBackup stop working after update to oVirt 4.2.7

ovirt-engine-extensions-tool aaa login-user --profile=domain --user-name=backupuser

works as before, but oVirtBackup can no longer authenticate.

Traceback (most recent call last):
File "./backup.py", line 404, in
main(sys.argv[1:])
File "./backup.py", line 212, in main
connect()
File "./backup.py", line 400, in connect
debug=False
File "/home/pontostroy/.local/lib/python2.7/site-packages/ovirtsdk/api.py", line 191, in init
url=''
File "/home/pontostroy/.local/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 122, in request
persistent_auth=self.__persistent_auth
File "/home/pontostroy/.local/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
persistent_auth)
File "/home/pontostroy/.local/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 162, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
ovirtsdk.infrastructure.errors.RequestError:
status: 401
reason: Unauthorized
detail:

<title>Error</title>Unauthorized
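
One thing worth checking (an assumption on my side, not a confirmed fix): the REST API expects the fully qualified user name including the authentication profile, e.g. backupuser@domain, in the config file. A minimal connectivity test with the same ovirtsdk 3.x that backup.py uses; the URL, user and password are placeholders:

from ovirtsdk.api import API
from ovirtsdk.infrastructure.errors import RequestError

try:
    api = API(url="https://engine.example.com/ovirt-engine/api",
              username="backupuser@domain",   # profile suffix included
              password="secret",
              insecure=True,
              debug=False)
    print("Connected to: %s" % api.get_product_info().get_name())
    api.disconnect()
except RequestError as e:
    print("Request failed with status %s (%s)" % (e.status, e.reason))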

Unicode Error thrown when backing up second machine in list

Hello
Since Feb 15th '18, the second VM in my backup list throws an error every time oVirtBackup tries to back it up:
"!!! Got unexpected exception: 'ascii' codec can't encode character u'\xfc' in position 25: ordinal not in range(128)"

Some research led me to this:
https://docs.python.org/2.7/howto/unicode.html#the-unicode-type

I think this error only started after I upgraded oVirt to 4.2.1, as it appeared shortly after 4.2.1 was released on Feb 12th '18. And by the way, bacchus isn't able to back up the same VM either.

Maybe this happens because the oVirt API received some changes? See: #47

Thanks for a hint.

EDIT: I executed all of the steps manually via the oVirt web UI (snapshot, clone, export and delete all items) for the same VM, and this works without any problems.
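
For what it's worth, the error itself can be reproduced outside oVirtBackup whenever a unicode value containing a character like u'\xfc' ("ü") is pushed through Python 2's implicit ASCII encoding; a small sketch with a made-up example string:

# -*- coding: utf-8 -*-
# Python 2 sketch: formatting a unicode value through str() triggers an
# implicit ASCII encode and fails on non-ASCII characters such as u'\xfc'.
description = u"Sicherung f\xfcr den Webserver"   # made-up example string

try:
    message = "Snapshots description: " + str(description)
except UnicodeEncodeError as exc:
    print("reproduced: %s" % exc)

# Encoding explicitly to UTF-8 (or keeping the whole message unicode) avoids it.
message = "Snapshots description: %s" % description.encode("utf-8")
print(message)

So a VM or snapshot description containing a non-ASCII character is enough to trigger it, which would fit the timing of the 4.2.1 upgrade if the API started returning such a description.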

Imported VM deleted in backup rotation

[screenshot attached]

Tarsus is the name of a VM imported from a backup created by this script.

If the VM is imported into the same cluster (for instance after a misconfiguration forced a restore from backup into the same cluster), the VM is deleted during the rotation process. Is this intentional, or is there any configuration I can use so that only the rotated backups (with the date suffix) are deleted, but not the VM that was imported from a backup?

I tried changing the name, collapsing the snapshots and/or cloning, but it still gets deleted. Any help will be appreciated.
Thank you!
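
My guess (not verified against the code) is that old backups are selected purely by name, i.e. anything that looks like <vm_name> + suffix + timestamp, so a VM imported from such an export keeps a matching name and falls into the same filter. A sketch of that kind of match, using the suffix format seen in the logs above:

from datetime import datetime

def looks_like_backup(vm_name, candidate, suffix="_BACKUP_"):
    """True if candidate is vm_name + suffix + a YYYYMMDD_HHMMSS timestamp."""
    prefix = vm_name + suffix
    if not candidate.startswith(prefix):
        return False
    try:
        datetime.strptime(candidate[len(prefix):], "%Y%m%d_%H%M%S")
        return True
    except ValueError:
        return False

print(looks_like_backup("Tarsus", "Tarsus_BACKUP_20170412_144408"))  # True
print(looks_like_backup("Tarsus", "Tarsus-restored"))                # False

Since renaming the imported VM reportedly did not help, the actual rotation criteria may well differ from this sketch.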

Selecting a disk for backup

Hey! Is it possible to back up only a specific disk of a virtual machine? There is a virtual machine whose second disk is very large, and its data is already backed up by other software.

Create VM from snapshot

It is necessary to increase the timeout between "snapshot successfully created" and the start of the VM clone. I set it to 180 seconds.
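
As an alternative to a fixed delay, one could poll the snapshot until it reports status "ok" before starting the clone. A rough sketch with ovirtsdk 3.x; the helper name and intervals are mine, and it assumes the snapshot object exposes get_snapshot_status():

import time

def wait_for_snapshot_ready(vm, description, timeout=180, interval=5):
    """Wait until the snapshot with the given description reports status 'ok'."""
    waited = 0
    while waited < timeout:
        snap = next((s for s in vm.snapshots.list()
                     if s.get_description() == description), None)
        if snap is not None and snap.get_snapshot_status() == "ok":
            return snap
        time.sleep(interval)
        waited += interval
    raise RuntimeError("Snapshot not ready after %d seconds" % timeout)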

oVirt 4.0.4 - Can't find relative path for class "org.ovirt.engine.api.resource.VmDisksResource", will return null

Hello, wefixit-AT !

ovirt-engine-4.0.4.4-1.el7.centos
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos

Trying to back up a virtual machine:

[root@MY-OVIRT1 ~]# /usr/local/sbin/ovirt-vm-backup/backup.py -c /usr/local/sbin/ovirt-vm-backup/config.cfg -d
Oct 05 17:03:41: Start backup for: MY-VM31
Oct 05 17:03:42: Snapshot creation started ...
Oct 05 17:03:42: Snapshot created
Oct 05 17:03:52: !!! No snapshot found
Oct 05 17:03:52: All backups done
Oct 05 17:03:52: Backup failured for:
Oct 05 17:03:52: MY-VM31
Oct 05 17:03:52: Some errors occured during the backup, please check the log file

/var/log/ovirt-engine/engine.log:

2016-10-05 17:03:41,573 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils](default task-31) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
2016-10-05 17:03:41,605 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand](default task-2) [586bb42d] Running command: CreateUserSessionCommand internal: false.
2016-10-05 17:03:42,080 ERROR [org.ovirt.engine.api.restapi.util.LinkHelper](default task-29) [] Can't find relative path for class "org.ovirt.engine.api.resource.VmDisksResource", will return null
2016-10-05 17:03:42,080 ERROR [org.ovirt.engine.api.restapi.util.LinkHelper](default task-29) [] Can't find relative path for class "org.ovirt.engine.api.resource.VmDisksResource", will return null
2016-10-05 17:03:52,257 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils](default task-15) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
2016-10-05 17:03:52,289 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand](default task-9) [482957db] Running command: CreateUserSessionCommand internal: false.
2016-10-05 17:03:52,321 INFO [org.ovirt.engine.core.bll.aaa.LogoutSessionCommand](default task-9) [4ac0e9d3] Running command: LogoutSessionCommand internal: false.
2016-10-05 17:03:52,332 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector](default task-9) [4ac0e9d3] Correlation ID: 4ac0e9d3, Call Stack: null, Custom Event ID: -1, Message: User admin@internal-authz logged out.

What could be the problem?

Export only a VM disk

Today I ran into some trouble (found a bug in oVirt). That gave me the idea to export only a selected disk to another location, to better separate system and data files. With the current solution all disks (system and data) are copied, which can take a very long time for large data disks. The data disks should in any case be backed up with a solution that supports incremental or differential backups.

Export VM directly from a snapshot

With the current oVirt SDK it is only possible to export a whole VM to the backup storage, therefore the following workflow is needed:

Create snapshot -> Clone into VM -> Delete Snapshot -> Export cloned VM -> Delete cloned VM
Disadvantage: The storage of the VM is needed because it has to be cloned to export it.
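
For orientation, a condensed sketch of that clone-based workflow in ovirtsdk 3.x terms (waiting loops and error handling omitted; the names are placeholders, so this is not the script's actual code):

from ovirtsdk.xml import params

def backup_via_clone(api, vm_name, clone_name, cluster_name, export_domain):
    vm = api.vms.get(vm_name)

    # 1. Create a snapshot
    vm.snapshots.add(params.Snapshot(description="Snapshot for backup script",
                                     persist_memorystate=False))
    # ... wait until the snapshot status is 'ok' ...
    snap = [s for s in vm.snapshots.list()
            if s.get_description() == "Snapshot for backup script"][0]

    # 2. Clone the snapshot into a new VM
    api.vms.add(params.VM(
        name=clone_name,
        snapshots=params.Snapshots(snapshot=[params.Snapshot(id=snap.get_id())]),
        cluster=params.Cluster(name=cluster_name)))
    # ... wait until the clone leaves 'image_locked' ...

    # 3. Delete the snapshot
    snap.delete()

    # 4. Export the cloned VM to the export domain
    clone = api.vms.get(clone_name)
    clone.export(params.Action(storage_domain=params.StorageDomain(name=export_domain)))
    # ... wait for the export to finish ...

    # 5. Delete the temporary clone
    clone.delete()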

If the SDK supports exporting directly from a snapshot, this will give a huge performance improvement.

State: Request sent to oVirt
